
10 Lessons Learned From Participating in Google AI Challenge
Disclaimer: I will present only a portion of the code I wrote for this competition; my teammates are absolutely not responsible for my awful and buggy code.
The dataset contains 50 million images with details for each image: the category the image falls into, a unique identifier (key_id), the country the drawing comes from (country), and the stroke points (drawing) needed to reproduce the drawing.
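For illustration, a single record looks roughly like this (a hypothetical example of my own; only the fields named above are shown, and the values are invented):

```python
# Hypothetical record illustrating the fields described above (values are made up).
sample = {
    "key_id": 5152802093400064,               # unique identifier
    "category": "cat",                        # which category the image falls into
    "country": "US",                          # where the drawing comes from
    "drawing": [                              # stroke points: one [xs, ys] pair per stroke
        [[17, 18, 20, 25], [0, 11, 25, 28]],
        [[66, 60, 52], [3, 10, 25]],
    ],
}
```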
But with that solution and such a big dataset, I faced another challenge: my Linux filesystem wasn't configured to support this huge number of inodes (50 million files added to the filesystem means 50 million new inodes created), and I ended up with a full filesystem without using all the available gigabytes.
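A quick way to spot this failure mode (a sketch of my own; /data is a hypothetical mount point) is to compare free bytes with free inodes:

```python
import os

# Sketch: a filesystem can still report plenty of free gigabytes while f_ffree
# (free inodes) is already at zero, which is exactly the "full without being full" symptom.
stats = os.statvfs("/data")  # hypothetical mount point holding the extracted images
free_gb = stats.f_bavail * stats.f_frsize / 1024**3
print(f"free space: {free_gb:.1f} GiB, free inodes: {stats.f_ffree:,} of {stats.f_files:,}")
```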
My initial intention was to implement parallel workers accessing the database in read-only mode by providing one SQLite file object per worker, but before going that way I first tried using a global lock on a single SQLite object. Surprisingly, it immediately gave decent results (GPU utilized at 98%), which is why I did not even bother to improve it.
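A minimal sketch of that approach, assuming threaded readers sharing one connection and a hypothetical drawings table (table and column names are my own, not taken from the original code):

```python
import sqlite3
import threading

from torch.utils.data import Dataset


class QuickDrawSqliteDataset(Dataset):
    """Sketch: every reader goes through one shared SQLite connection guarded by a lock."""

    _lock = threading.Lock()  # global lock serializing access to the single connection

    def __init__(self, db_path, transform=None):
        # check_same_thread=False allows the connection to be reused from reader threads.
        self.conn = sqlite3.connect(db_path, check_same_thread=False)
        self.transform = transform
        with self._lock:
            self.length = self.conn.execute("SELECT COUNT(*) FROM drawings").fetchone()[0]

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        with self._lock:  # only one reader touches the connection at a time
            category, drawing = self.conn.execute(
                "SELECT category, drawing FROM drawings WHERE rowid = ?", (idx + 1,)
            ).fetchone()
        image = self.transform(drawing) if self.transform else drawing
        return image, category
```

Even though every query is serialized by the lock, each lookup is cheap compared to the GPU-side work, which is consistent with the GPU staying almost fully utilized.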
To do so, I took inspiration from Beluga's kernel, which served as a baseline for many competitors. I noticed some issues with that piece of code and made a few improvements. Note: I did not encode the velocity or time information provided; it would probably add more information that the CNN could use.
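For context, the core of that kind of baseline renders the recorded stroke points onto a bitmap. A minimal sketch of my own (not Beluga's actual code), assuming the simplified stroke format of one [xs, ys] pair per stroke with coordinates in 0-255:

```python
import cv2
import numpy as np


def strokes_to_image(strokes, size=64, line_width=2):
    """Render strokes ([[x0, x1, ...], [y0, y1, ...]] per stroke, coords in 0-255)
    into a grayscale bitmap by connecting consecutive points of each stroke."""
    canvas = np.zeros((256, 256), dtype=np.uint8)
    for xs, ys in strokes:
        points = np.stack([xs, ys], axis=1).astype(np.int32)
        for p0, p1 in zip(points[:-1], points[1:]):
            cv2.line(canvas, (int(p0[0]), int(p0[1])), (int(p1[0]), int(p1[1])),
                     color=255, thickness=line_width)
    if size != 256:
        canvas = cv2.resize(canvas, (size, size))
    return canvas
```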
I made this decision based on the fact that the dataset, with its 50 million images, is pretty huge, and that ImageNet images (real-world pictures) are pretty far from a 10-second drawn sketch.
The model presented is a stock torchvision model with a custom head similar to the original one, in which I replaced the final pooling layer with an adaptive counterpart (AdaptiveAvgPool2d) to gracefully handle images of different resolutions.
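A minimal sketch of that idea (my own; ResNet-34 and 340 classes are assumptions, since the article does not state the exact architecture or head sizes):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 340  # assumption: number of sketch categories in the challenge

# Stock torchvision backbone, trained from scratch (no pretrained weights requested).
model = models.resnet34()
# Adaptive pooling collapses any final feature-map size to 1x1, so different input
# resolutions (64x64, 128x128, ...) all feed the same linear head.
model.avgpool = nn.AdaptiveAvgPool2d(output_size=1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Sanity check: two different resolutions flow through the same network.
for size in (64, 128):
    x = torch.randn(2, 3, size, size)
    assert model(x).shape == (2, NUM_CLASSES)
```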