AI News, szagoruyko/wide-residual-networks
- On 26 September 2018
This code was used for experiments with Wide Residual Networks (BMVC 2016) http://arxiv.org/abs/1605.07146 by Sergey Zagoruyko and Nikos Komodakis.
Deep residual networks were shown to scale to thousands of layers with still-improving performance, but each fraction of a percent of improved accuracy costs nearly doubling the number of layers, so training very deep residual networks suffers from diminishing feature reuse and is very slow. If
you're comparing your method against WRN, please report numbers with the correct preprocessing, because different preprocessing gives substantially different results.
On ImageNet, WRN-50-2-bottleneck (ResNet-50 with a 2x wider inner 3x3 bottleneck convolution) is significantly faster than ResNet-152 and has better accuracy;
on COCO, a wide ResNet with 34 layers outperforms even the Inception-v4-based Fast-RCNN model in single-model performance.
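The "50-2" naming refers to widening the inner 3x3 convolution of each bottleneck block by a factor of 2. A minimal pure-Python sketch of per-block convolution weight counts (an illustration, not code from the repository; BatchNorm, bias, and shortcut weights are ignored) shows where the extra capacity goes:

```python
def bottleneck_params(in_ch, mid_ch, out_ch, widen=1):
    """Conv-weight count of a ResNet bottleneck (1x1 -> 3x3 -> 1x1)
    whose inner 3x3 convolution is widened by `widen` (WRN-50-2 uses 2).
    BatchNorm, bias, and shortcut parameters are ignored for simplicity."""
    w = mid_ch * widen
    return (in_ch * w * 1 * 1      # 1x1 reduce
            + w * w * 3 * 3        # (widened) 3x3 conv
            + w * out_ch * 1 * 1)  # 1x1 expand

plain = bottleneck_params(256, 64, 256)          # ResNet-50-style block
wide = bottleneck_params(256, 64, 256, widen=2)  # WRN-50-2-style block
print(plain, wide)  # 69632 212992
```

The widened block has roughly 3x the weights of the plain one, but because they sit in a single wide 3x3 convolution rather than extra layers, the block maps much better onto GPU parallelism, which is the source of the speed advantage over the deeper ResNet-152.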
Test error (%) on CIFAR with flip/translation augmentation and meanstd normalization (median of 5 runs), together with single-run results (meanstd normalization), is tabulated in the paper; see http://arxiv.org/abs/1605.07146 for details.
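As a sketch of what "meanstd normalization" means here, assuming the usual convention of per-channel statistics computed on the training set and applied to both splits (a NumPy illustration, not the repository's Torch preprocessing code):

```python
import numpy as np

def meanstd_normalize(train, test):
    """Per-channel mean/std normalization; statistics come from the
    training set only (images as float arrays of shape N x H x W x C)."""
    mean = train.mean(axis=(0, 1, 2))
    std = train.std(axis=(0, 1, 2))
    return (train - mean) / std, (test - mean) / std

# toy data standing in for CIFAR batches
rng = np.random.default_rng(0)
train = rng.uniform(0, 255, size=(8, 32, 32, 3))
test = rng.uniform(0, 255, size=(2, 32, 32, 3))
tr, te = meanstd_normalize(train, test)
print(np.allclose(tr.mean(axis=(0, 1, 2)), 0, atol=1e-7))
```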
Follow the installation instructions and run the training command. For visualizing training curves we used an IPython notebook with pandas and bokeh.
To whiten CIFAR-10 and CIFAR-100 we used the pylearn2 script https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/scripts/datasets/make_cifar10_gcn_whitened.py, then converted the result to Torch format with https://gist.github.com/szagoruyko/ad2977e4b8dceb64c68ea07f6abf397b and the npy-to-torch converter https://github.com/htwaijry/npy4th.
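A rough NumPy sketch of what the pylearn2 whitening script does, namely global contrast normalization followed by ZCA whitening; the `scale` and `eps` values are illustrative assumptions, not the script's exact settings:

```python
import numpy as np

def gcn(X, scale=55.0, eps=1e-8):
    """Global contrast normalization: make each image (row of X, N x D)
    zero-mean and rescale it to a fixed norm."""
    X = X - X.mean(axis=1, keepdims=True)
    norms = np.sqrt((X ** 2).sum(axis=1, keepdims=True))
    return scale * X / np.maximum(norms, eps)

def zca_whiten(X, eps=1e-2):
    """Fit ZCA whitening on training matrix X (N x D): decorrelate features
    while staying as close as possible to the original pixel space."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # ZCA: rotate back with U
    return Xc @ W, W, mean

# toy data standing in for flattened CIFAR images (N x D)
rng = np.random.default_rng(0)
X = gcn(rng.uniform(0, 255, size=(64, 48)))
Xw, W, mean = zca_whiten(X)
```

At test time the same `W` and `mean` fitted on the training set would be applied to the test images, which is why the converted Torch tensors are saved once rather than recomputed.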
Training output is saved to the logs/wide-resnet_$RANDOM$RANDOM folder, with JSON entries for each epoch, and can be visualized with iTorch/IPython later.
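Since the log holds one JSON entry per epoch, the training curves can be loaded with pandas along these lines (the file name and field names are illustrative assumptions, not the repository's exact schema):

```python
import json
import pandas as pd

def load_training_log(path):
    """Read a log file containing one JSON object per line (one per epoch)
    into a DataFrame, skipping blank lines."""
    records = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return pd.DataFrame(records)

# toy log standing in for logs/wide-resnet_$RANDOM$RANDOM contents;
# field names here are hypothetical
with open("log.txt", "w") as f:
    f.write('{"epoch": 1, "train_loss": 1.9, "test_acc": 44.1}\n')
    f.write('{"epoch": 2, "train_loss": 1.2, "test_acc": 57.3}\n')

df = load_training_log("log.txt")
print(df["test_acc"].max())  # 57.3
```

From here `df.plot(x="epoch")` (or bokeh, as the authors used) gives the training curves directly.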
To reduce memory usage we use @fmassa's optimize-net, which automatically shares output and gradient tensors between modules.
- On 22 January 2021
Lecture 9 | CNN Architectures
In Lecture 9 we discuss some common architectures for convolutional neural networks. We discuss architectures which performed well in the ImageNet ...
Lesson 2: Deep Learning 2018
NB: please view this video on the course site, since there is important updated information there. If you have questions, use the forums.
Lecture 1 | Introduction to Convolutional Neural Networks for Visual Recognition
Lecture 1 gives an introduction to the field of computer vision, discussing its history and key challenges. We emphasize that computer vision encompasses a ...
MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars
This is lecture 1 of course 6.S094: Deep Learning for Self-Driving Cars, taught in Winter 2017.
Dr. Yann LeCun, "How Could Machines Learn as Efficiently as Animals and Humans?"
Brown Statistics, NESS Seminar and Charles K. Colver Lectureship Series Deep learning has caused revolutions in computer perception and natural language ...
Lec07 Introduction to Deep Learning with Neural Networks (Part 2)
History of deep learning as it evolved over ages, different families of deep neural networks, practical scenarios of use, current challenges in the field.
Lecture 5.2: Andrei Barbu - From Language to Vision and Back Again
MIT RES.9-003 Brains, Minds and Machines Summer Course, Summer 2015. Instructor: Andrei Barbu.
TensorFlow Dev Summit 2018 - Livestream
Live from Mountain View, CA! Join the TensorFlow team as they host the second TensorFlow Dev Summit; all sessions are collected in a playlist.