
Baidu Apologizes for Touting Dubious Artificial-Intelligence Feat

On Tuesday, the volunteer computer scientists who administer the ImageNet test reported that Baidu had stacked the deck by taking the test far more frequently than allowed.

By taking the test so many times, Baidu’s engineers could have gained an advantage by tuning their software to information that was supposed to be unfamiliar.

“This is pretty bad, and it is exactly why there is a held-out test set for the competition that is hosted on a separate server with limited access,” said Matthew Zeiler, chief executive of AI company Clarifai Inc.

“If you know the test set, then you can tweak your parameters of your model however you want to optimize the test set.”

The organizers have asked Baidu to stop submitting ImageNet results for the next year.
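The dynamic Zeiler describes is easy to demonstrate. Below is a toy simulation – purely illustrative, not Baidu's setup – in which every submission has the same true accuracy and only measurement noise varies; simply keeping the best of 200 scores inflates the reported number.

```python
# Toy simulation (illustrative only): why repeated test-set submissions
# inflate scores. Every "submission" here has the same true accuracy of 70%;
# only the finite-sample measurement fluctuates.
import numpy as np

rng = np.random.default_rng(0)
TRUE_ACCURACY = 0.70
TEST_SET_SIZE = 1_000

def observed_accuracy():
    # Accuracy measured on a finite test set varies around the true value.
    return rng.binomial(TEST_SET_SIZE, TRUE_ACCURACY) / TEST_SET_SIZE

honest = observed_accuracy()                                # one submission
best_of_200 = max(observed_accuracy() for _ in range(200))  # keep best of 200

print(f"single submission: {honest:.3f}")
print(f"best of 200:       {best_of_200:.3f}")  # typically 3-4 points higher
```

No model improved between those two lines; the gap is pure selection on noise, which is exactly what a rate limit on a held-out test server is meant to prevent.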

Why and How Baidu Cheated an Artificial Intelligence Test

The sport of training software to act intelligently just got its first cheating scandal.

Baidu, Google, Facebook, and other major computing companies have spent heavily in recent years to build research groups dedicated to deep learning, an approach to building machine learning software that has made great strides in speech and image recognition.

A handful of standardized tests developed in academia are the currency by which these research groups compare one another’s progress and promote their achievements to the public.

Baidu has admitted that it used multiple email accounts to test its code roughly 200 times in just under six months – over four times what the rules allow (at two official submissions per week, a team would be entitled to roughly 50 in that span).

On top of that, testing slightly different code across many runs could help a research team optimize its software for peculiarities of the validation images that aren’t reflected in real-world photos.

That Baidu and others continue to trumpet their results all the same – and may even be willing to break the rules – suggests that being the best at machine learning matters to them very much indeed.

Ex-Baidu Researcher Ren Wu Denies Wrongdoing

Dr. Ren Wu vigorously denies the allegations of cheating that reportedly led Baidu to fire him as head of its Heterogeneous Computing team after the Chinese search engine developer's supercomputer team was accused of gaming an artificial intelligence competition.

(Wu was a speaker at last year's Enterprise HPC conference in San Diego.) Wu, a distinguished scientist at Baidu's Institute of Deep Learning, was let go after the company was disqualified from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a standardized and independent AI test, in which it allegedly created and used multiple accounts to run many more evaluations each week than its competitors.

After ILSVRC contacted Baidu to alert the company that it had vastly exceeded the allowable number of weekly submissions to the ImageNet server, Baidu began its own internal inquiry, the company said in a blog post.

'We found that a team leader had directed junior engineers to submit more than two submissions per week, a breach of the current ImageNet rules,' Baidu's Heterogeneous Computing team wrote.

'Any action that runs counter to the highest standards of academic and scientific integrity, no matter how large or small, is unacceptable to us and does not reflect the culture of our company.'

'Our paper has five authors, and so based [on] the rule above, we should be allowed to submit around 260 times. And so, our 200 submissions were well within the 260 limit set by the rule,' Wu argued – apparently reading the two-submissions-per-week cap as applying per author, so that five authors over roughly 26 weeks gives 5 × 2 × 26 = 260.

According to the rules, each team – and the field also included Google and Microsoft – could access the database twice a week in order to finesse its image-recognition algorithms, but Baidu reportedly visited the database more than 200 times in six months.

'High performance computing, which enabled very aggressive data augmentation, working on higher resolution models, and being able to train large models. That is the reason of our success,' said Wu.
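For readers unfamiliar with the term, the sketch below shows what an image-augmentation pipeline typically looks like, written here with torchvision transforms purely as a stand-in – Baidu's actual (and reportedly far more aggressive) pipeline was custom and is not public.

```python
# Illustrative augmentation pipeline using torchvision (not Baidu's stack).
# Each training epoch sees a differently distorted version of every photo,
# multiplying the effective size of the training set.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),  # crops at many scales and positions
    transforms.RandomHorizontalFlip(),                     # mirror images half the time
    transforms.ColorJitter(brightness=0.4, contrast=0.4,
                           saturation=0.4),                # photometric noise
    transforms.ToTensor(),
])
```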

Why Deep Learning Is Suddenly Changing Your Life

Over the past four years, readers have doubtless noticed quantum leaps in the quality of a wide range of everyday technologies.

To gather up dog pictures, the app must identify anything from a Chihuahua to a German shepherd and not be tripped up if the pup is upside down or partially obscured, at the right of the frame or the left, in fog or snow, sun or shade.

Medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, to diagnose cancer earlier and less invasively, and to accelerate the search for life-saving pharmaceuticals.

They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences.
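A toy example makes the distinction concrete. The program below – a logistic-regression sketch, far simpler than a deep network – is never told the rule that generates its labels; gradient descent recovers the rule from examples alone.

```python
# Illustrative only: "training" vs. "programming". Nothing below encodes the
# labeling rule (y = 1 when x1 + x2 > 1); the model infers it from examples.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((500, 2))                      # 500 examples, 2 features
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)   # labels; the rule stays hidden

w, b = np.zeros(2), 0.0
for _ in range(2000):                         # gradient descent on logistic loss
    p = 1 / (1 + np.exp(-(X @ w + b)))        # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print("learned weights:", w.round(2), "bias:", round(b, 2))
print("training accuracy:", np.mean((p > 0.5) == y))
```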

“You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia, which began placing a massive bet on deep learning about five years ago.

What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well.

“We’re now living in an age,” Chen observes, “where it’s going to be mandatory for people building sophisticated software applications.” People will soon demand, he says, “ ‘Where’s your natural-language processing version?’ ‘How do I talk to your app?’ ”

The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations.

Its chief financial officer told investors that “the vast majority of the growth comes from deep learning by far.” The term “deep learning” came up 81 times during the 83-minute earnings call.

“I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy.” Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view.

ImageNet

The ILSVRC aims to 'follow in the footsteps' of the smaller-scale PASCAL VOC challenge, established in 2005, which contained only about 20,000 images and twenty object classes.[6] The ILSVRC uses a 'trimmed' list of only 1000 image categories or 'classes', including 90 of the 120 dog breeds classified by the full ImageNet schema.[6] The 2010s saw dramatic progress in image processing.

In the next couple of years, error rates fell to a few percent.[10] While the 2012 breakthrough 'combined pieces that were all there before', the dramatic quantitative improvement marked the start of an industry-wide artificial intelligence boom.[4] By 2015, researchers reported that software exceeded human ability at the narrow ILSVRC tasks.[11] However, as one of the challenge's organisers, Olga Russakovsky, pointed out in 2015, the programs only have to identify images as belonging to one of a thousand categories; humans can recognize a larger number of categories, and also (unlike the programs) can judge the context of an image.[12]

By 2014, over fifty institutions participated in the ILSVRC.[6] In 2015, Baidu scientists were banned for a year for using different accounts to greatly exceed the specified limit of two submissions per week.[13][14] Baidu later stated that it fired the team leader involved and that it would establish a scientific advisory panel.[15] In 2017, 29 of 38 competing teams got less than 5% wrong.[16] In 2017 ImageNet stated it would roll out a new, much more difficult challenge in 2018 that involves classifying 3D objects using natural language.
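The 'percent wrong' figures here are top-5 error, the headline ILSVRC classification metric: a prediction counts as correct if the true label appears among a model's five highest-scoring classes. A minimal sketch of the computation, with random scores standing in for a real model:

```python
# Top-5 error: the metric behind ILSVRC classification results.
import numpy as np

def top5_error(scores, labels):
    # scores: (n_images, 1000) class scores; labels: (n_images,) true class ids
    top5 = np.argsort(scores, axis=1)[:, -5:]      # five best classes per image
    hits = (top5 == labels[:, None]).any(axis=1)   # true class among them?
    return 1.0 - hits.mean()

rng = np.random.default_rng(0)
scores = rng.random((100, 1000))                   # random stand-in "model"
labels = rng.integers(0, 1000, size=100)
print(f"random-guess top-5 error: {top5_error(scores, labels):.3f}")  # ~0.995
```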

Lecture 7 | Training Neural Networks II

Lecture 7 continues our discussion of practical issues for training neural networks. We discuss different update rules commonly used to optimize neural networks ...
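As a flavor of what such a lecture covers, here is a small sketch – not the course's code – of two classic update rules, vanilla SGD and SGD with momentum, minimizing a one-dimensional quadratic:

```python
# Two common update rules (illustrative sketch), minimizing f(w) = w**2.

def grad(w):
    return 2.0 * w  # gradient of f(w) = w**2

lr, rho = 0.1, 0.9
w_sgd, w_mom, velocity = 5.0, 5.0, 0.0

for _ in range(200):
    w_sgd -= lr * grad(w_sgd)                     # SGD: step against the gradient
    velocity = rho * velocity - lr * grad(w_mom)  # momentum: velocity accumulates past gradients
    w_mom += velocity

# Both converge to the minimum at w = 0; momentum oscillates on the way.
print(f"sgd: {w_sgd:.2e}   momentum: {w_mom:.2e}")
```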

TensorFlow Dev Summit 2018 - Livestream

TensorFlow Dev Summit 2018 All Sessions playlist → Live from Mountain View, CA! Join the TensorFlow team as they host the second ..