
New course: Deep Learning from the Foundations

Today we are releasing a new course (taught by me), Deep Learning from the Foundations, which shows how to build a state-of-the-art deep learning model from scratch.

It takes you all the way from the foundations of implementing matrix multiplication and backpropagation, through high-performance mixed-precision training, to the latest neural network architectures and learning techniques, and everything in between.
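
As a taste of the code-first style, here is a hedged sketch of the naive starting point for matrix multiplication in pure Python. The course progressively vectorizes and accelerates this; the exact function below is illustrative, not the course's own code:

```python
def matmul(a, b):
    """Multiply matrix a (n x k) by matrix b (k x m), given as lists of lists."""
    n, k = len(a), len(a[0])
    k2, m = len(b), len(b[0])
    assert k == k2, "inner dimensions must match"
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):          # naive triple loop -- the baseline to beat
        for j in range(m):
            for p in range(k):
                out[i][j] += a[i][p] * b[p][j]
    return out

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19.0, 22.0], [43.0, 50.0]]
```

Starting from a version this slow is deliberate: each subsequent rewrite (broadcasting, then library calls) can be benchmarked against it.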

It covers many of the most important academic papers that form the foundations of modern deep learning, using “code-first” teaching, where each method is implemented from scratch in Python and explained in detail (in the process, we’ll discuss many important software engineering techniques too).

The whole course, covering around 15 hours of teaching and dozens of interactive notebooks, is entirely free (and ad-free), provided as a service to the community.

It is the latest in our ongoing commitment to providing free, practical, cutting-edge education for deep learning practitioners and educators—a commitment that has been appreciated by hundreds of thousands of students, led to The Economist saying “Demystifying the subject, to make it accessible to anyone who wants to learn how to build AI software, is the aim of Jeremy Howard… It is working”, and to CogX awarding us the Outstanding Contribution in AI award.

A huge amount of work went into the last two lessons—not only did the team need to create new teaching materials covering both TensorFlow and Swift, but also to create a new fastai Swift library from scratch and add a lot of new functionality (and squash a few bugs!) in Swift for TensorFlow.

It was a very close collaboration between Google Brain’s Swift for TensorFlow group and fast.ai, and wouldn’t have been possible without the passion, commitment, and expertise of the whole team, from both Google and fast.ai.

We’ll then use this to create a basic neural net forward pass, including a first look at how neural networks are initialized (a topic we’ll be going into in great depth in the coming lessons).
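
A minimal sketch of what such a forward pass looks like, assuming a Kaiming-style initialization (the exact scaling scheme and names here are illustrative, not the lesson's code — the lessons explore why the choice of init matters so much):

```python
import math
import random

random.seed(0)

def init_layer(n_in, n_out):
    # Kaiming-style scaling: without it, activations can explode or
    # vanish as depth grows (a topic the lessons cover in depth).
    scale = math.sqrt(2.0 / n_in)
    w = [[random.gauss(0, scale) for _ in range(n_out)] for _ in range(n_in)]
    b = [0.0] * n_out
    return w, b

def linear(x, w, b):
    return [sum(x[i] * w[i][j] for i in range(len(x))) + b[j]
            for j in range(len(b))]

def relu(z):
    return [max(0.0, v) for v in z]

def forward(x, layers):
    for w, b in layers[:-1]:
        x = relu(linear(x, w, b))
    w, b = layers[-1]
    return linear(x, w, b)  # no nonlinearity on the output layer

layers = [init_layer(4, 8), init_layer(8, 1)]
out = forward([0.5, -0.2, 0.1, 0.9], layers)
print(len(out))  # 1
```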

Finally, we develop a new kind of normalization layer to overcome these problems, compare it to previously published approaches, and see some very encouraging results.
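
The basic computation any such layer performs can be sketched as follows — standardize to zero mean and unit variance, then rescale with learnable parameters. This is only the core operation; the lesson's actual layer, which addresses the small-batch problems above, differs:

```python
def normalize(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Standardize xs, then apply learnable scale (gamma) and shift (beta)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    # eps keeps the division stable when variance is near zero
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in xs]

out = normalize([1.0, 2.0, 3.0, 4.0])
print(abs(sum(out)) < 1e-6)  # True: mean is ~0 after normalization
```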

We’ll look closely at each step. Next up, we build a new StatefulOptimizer class and show that nearly all optimizers used in modern deep learning training are just special cases of this one class.
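
A hedged sketch of the idea: one generic class that keeps per-parameter state and applies a list of stepper functions. The names below mirror the lesson's approach but are written from scratch here, not copied from fastai:

```python
class StatefulOptimizer:
    def __init__(self, params, steppers, **defaults):
        self.params = params                 # list of dicts: {"value": ..., "grad": ...}
        self.steppers = steppers             # update functions applied to each param
        self.state = [dict() for _ in params]  # per-parameter state (e.g. momentum buffer)
        self.hypers = defaults               # hyperparameters, e.g. lr, mom

    def step(self):
        for p, st in zip(self.params, self.state):
            for stepper in self.steppers:
                stepper(p, st, **self.hypers)

def momentum_step(p, state, lr, mom=0.9, **kw):
    # Exponentially-averaged gradient; plain SGD is the special case mom=0
    state["avg"] = mom * state.get("avg", 0.0) + p["grad"]
    p["value"] -= lr * state["avg"]

param = {"value": 1.0, "grad": 0.5}
opt = StatefulOptimizer([param], [momentum_step], lr=0.1, mom=0.0)
opt.step()
print(param["value"])  # 0.95 — a plain SGD step, since mom=0
```

Other optimizers fall out the same way: add a stepper (or state) rather than a whole new class.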

We develop a new GPU-based data augmentation approach which we find speeds things up quite dramatically, and allows us to then add more sophisticated warp-based transformations.

In lesson 12 we implement some really important training techniques, all using callbacks. We also implement xresnet, a tweaked version of the classic ResNet architecture that provides substantial improvements.
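
The callback pattern can be sketched like this — a training loop that knows nothing about the techniques, and callbacks that hook into it. The class and method names are illustrative assumptions, not fastai's API:

```python
class Callback:
    """Base class: hooks the training loop calls at fixed points."""
    def before_batch(self, learn): pass
    def after_batch(self, learn): pass

class CounterCallback(Callback):
    """Toy example: count batches (real callbacks schedule LR, clip grads, etc.)."""
    def __init__(self):
        self.batches = 0
    def after_batch(self, learn):
        self.batches += 1

class Learner:
    def __init__(self, data, callbacks):
        self.data, self.callbacks = data, callbacks

    def fit(self):
        for batch in self.data:
            for cb in self.callbacks:
                cb.before_batch(self)
            # ... forward / backward / optimizer step would go here ...
            for cb in self.callbacks:
                cb.after_batch(self)

counter = CounterCallback()
Learner(data=range(3), callbacks=[counter]).fit()
print(counter.batches)  # 3
```

The payoff is that each training technique becomes one small, removable class instead of another branch inside the loop.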

Finally, we show how to implement ULMFiT from scratch, including building an LSTM RNN, and looking at the various steps necessary to process natural language data to allow it to be passed to a neural network.
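
The tokenize-then-numericalize steps can be sketched as below. The real lesson uses a proper tokenizer with special tokens; this naive whitespace split is only illustrative:

```python
def tokenize(text):
    # Deliberately naive: lowercase and split on whitespace
    return text.lower().split()

def build_vocab(texts, unk="<unk>"):
    # Map each token to an integer id; id 0 is reserved for unknown tokens
    vocab = {unk: 0}
    for t in texts:
        for tok in tokenize(t):
            vocab.setdefault(tok, len(vocab))
    return vocab

def numericalize(text, vocab):
    # Unknown tokens fall back to the <unk> id
    return [vocab.get(tok, 0) for tok in tokenize(text)]

vocab = build_vocab(["the cat sat", "the dog sat"])
print(numericalize("the cat ran", vocab))  # [1, 2, 0] — "ran" is unknown
```

Only after this mapping can text be batched into the integer tensors an LSTM consumes.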

He shares insights on its development history, and why he thinks it’s a great fit for deep learning and numeric programming more generally.

Thanks to the compilation and language design, basic code runs very fast indeed: about 8,000 times faster than Python in the simple example Chris showed in class.

He shows how to use this to quickly and easily get high performance code by interfacing with existing C libraries, using Sox audio processing, and VIPS and OpenCV image processing as complete working examples.
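
Swift's C interop doesn't translate directly here, but the same call-into-an-existing-C-library idea can be sketched in Python with the standard library's ctypes. This assumes a standard C library is findable on the system; the function called is plain C `abs`, not anything from the lesson:

```python
import ctypes
import ctypes.util

# Locate and load the C standard library (name differs per platform)
libc = ctypes.CDLL(ctypes.util.find_library("c") or "msvcrt")

# Declare abs()'s signature so ctypes marshals arguments correctly
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```

Declaring `argtypes`/`restype` is the moral equivalent of the header declarations Swift imports automatically — skip it and you get silent marshalling bugs.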

So be sure to study the notebooks to see lots more Swift tricks… We’ll be releasing even more lessons in the coming months and adding them to an attached course we’ll be calling Applications of Deep Learning.

Ethics of AI

Detecting people, optimising logistics, providing translations, composing art: artificial intelligence (AI) systems are not only changing what and how we are doing ...

The Ethics and Governance of AI opening event, February 3, 2018

Chapter 1 (0:04): Joi Ito
Chapter 2 (1:03:27): Jonathan Zittrain
Chapter 3 (2:32:59): Panel 1, Joi Ito moderates a panel with Pratik Shah, Karthik Dinakar, and ...

AI in the Admin State | AI and Biomedical Resource Creation, Biopharmaceuticals and Digital Health

Moderators: Nita Farahany, Duke Law School, Duke Initiative for Science & Society; Arti Rai, Duke Law School, The Center for Innovation Policy at Duke Law ...

January 23: Algorithmic Decision-Making and Accountability

Jeff Larson, Safiya Noble, and Nikhyl Singhal join the Stanford teaching team, Rob Reich, Mehran Sahami, Jeremy Weinstein, and Hilary Cohen, to illuminate ...

Sundance Film Festival panel: Imagining work in an AI integrated future

Presented by Unity and Dell at the 2018 Sundance Film Festival, this panel discussed how emerging technologies may fundamentally change the nature of work ...

AI Responsibility (Cloud Next '19)

We recognize that powerful AI technology raises equally powerful questions about its use and may have a significant impact on society for many years to come.

Microsoft AI: Empowering us all | Tim O’Brien

We are inspired by those who endeavor to think big, dream bold, and advance our world. Let us share with you how we can help to empower business by ...

AI in Public Sector: Tool for inclusion or exclusion?

March 1, 2018 panel event: AI in the Public Sector: Tool for inclusion or exclusion? This panel was organized by the Taskar Center for Accessible Technology as ...

Beth Altringer: AI and Ethical Design

Beth Altringer of Harvard's John A. Paulson School of Engineering and Applied Sciences and the Graduate School of Design discusses how to design AI ...

Ross Intelligence: AI in the Legal Profession, AI and Legal Research

Stephen Turner, Lawyers of Tomorrow Podcast: Welcome to the ...