AI News, From the VP of Artificial Intelligence

SEI Announces Establishment of AI Division, Names Director

Carnegie Mellon University's Software Engineering Institute (SEI) has announced the establishment of a new research division dedicated to artificial intelligence (AI) engineering and named Matthew Gaston as the new division's director.

As a federally funded research and development center, the SEI helps government and industry organizations develop and operate software systems that are secure and reliable.

AI engineering is an emerging field of research and practice that combines the principles of systems engineering, software engineering, computer science and human-centered design to create AI systems in accordance with human needs for mission outcomes.

'It is critical for the U.S. government to bring engineering discipline to AI as a key enabler for national security, and it is particularly fitting for the Software Engineering Institute to contribute to this discipline because of the university's long history of leadership in this area.'

The new division aims 'to transform the creation of AI systems from one-time, custom-crafted solutions into repeatable, scalable and reliable programs and services that can help the DOD achieve mission success,' said Paul Nielsen, SEI director and CEO.

'Using our initial work in the Emerging Technology Center and across the SEI as a foundation, we plan to build on the strong legacy of software engineering research at the SEI, initiate exciting new projects, work closely with world-class AI researchers across CMU, and build a community of collaborators throughout government, industry and academia.'

NVIDIA CEO Speaks at UK AI Event on How AI Is Changing the World

“I believe artificial intelligence will democratize technology,” NVIDIA CEO Jensen Huang said during a virtual fireside chat with CogX co-founder Tabitha Goldstaub that touched on a wide range of topics, including NVIDIA’s plans to acquire Arm.

Huang referred to the $100 million Cambridge-1, an AI supercomputer focused on healthcare and digital biology due to be officially dedicated next month, as “just the first” of NVIDIA’s U.K. investments.

The talks featuring NVIDIA leaders explored several key themes, including research and engineering excellence, nurturing world-class startups, the role of AI in fighting climate change, and how AI is being applied to advance critical sectors such as healthcare.

In addition to Huang, Katie Kallot, NVIDIA's head of emerging areas, who oversees ecosystem, strategic alliances and developer relations for emerging markets, segments and use cases, spoke about where the next $100 billion deep learning company will come from.

The government’s current industrial strategy, promulgated in 2017, includes a package of support worth as much as £0.95 billion ($1.3 billion) for the AI sector.

The Biden Administration Launches the National Artificial Intelligence Research Resource Task Force

Task Force members will help develop a roadmap to democratize access to research tools that will promote AI innovation and fuel economic prosperity.

Today, the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) announced the newly formed National Artificial Intelligence Research Resource (NAIRR) Task Force, which will write the roadmap for expanding access to critical resources and educational tools that will spur AI innovation and economic prosperity nationwide.

“By bringing together the nation’s foremost experts from academia, industry, and government, we will be able to chart an exciting and compelling path forward, ensuring long-term U.S. competitiveness in all fields of science and engineering and all sectors of our economy.”

Technical experts representing government, higher education, and private organizations will serve on the Task Force. Public input on the vision for and implementation of the NAIRR will be sought, including through a forthcoming request for information to be posted to the Federal Register.

Forward Thinking on Artificial Intelligence with Microsoft CTO Kevin Scott

In this episode of the McKinsey Global Institute’s Forward Thinking podcast, MGI’s James Manyika explores the implications of artificial general intelligence (AGI) for jobs, particularly in rural America, with Kevin Scott, Microsoft’s chief technology officer and author of Reprogramming the American Dream: From Rural America to Silicon Valley—Making AI Serve Us All (HarperCollins, 2020).

James Manyika: We’re going to spend a fair amount of time discussing your book, but first I wanted to talk about what you’re working on right now.

Kevin Scott: There are many things that we’ve been working on for the past couple of years that I’m excited about, including these large-scale computing platforms for training a new type of deep neural network model.

It’s been really thrilling to build all of the systems infrastructure to support these training computations, which are absolutely enormous, and to see the progress being made on these large self-supervised models and on deep reinforcement learning.

Kevin Scott: Around 2012 or so, the big revolution in machine learning began happening with deep neural networks, and these supervised learning models have been able to accomplish a lot in speech recognition, computer vision, and a whole bunch of these perceptual domains.

We very quickly went from a plateau that we had hit with the prior set of techniques to new levels that in many cases approximate or exceed human performance at the equivalent task.

Kevin Scott: The really interesting thing is that you don’t have the constraint of having to supply these models with large numbers of labeled training data points or examples.
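To make that contrast concrete, here is a minimal sketch of the self-supervised idea, assuming PyTorch; the toy character-level setup and every name in it are illustrative, not a system Scott describes. The "label" is simply a hidden piece of the raw input, so no human annotation is required.

```python
# Toy self-supervised objective: hide one character in a window of raw text
# and train a model to predict it. The training signal comes from the data
# itself; no human-labeled examples are involved.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "the quick brown fox jumps over the lazy dog " * 20
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

CONTEXT = 8
MASK = len(chars)  # extra token id used to hide a position

class MaskedPredictor(nn.Module):
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab + 1, dim)   # +1 for the mask token
        self.head = nn.Linear(dim * CONTEXT, vocab)
    def forward(self, x):
        return self.head(self.emb(x).flatten(1))

model = MaskedPredictor(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    idx = torch.randint(0, len(data) - CONTEXT, (64,)).tolist()
    windows = torch.stack([data[i:i + CONTEXT] for i in idx])
    pos = torch.randint(0, CONTEXT, (64,))
    targets = windows[torch.arange(64), pos].clone()  # the hidden characters
    windows[torch.arange(64), pos] = MASK             # mask them out
    loss = F.cross_entropy(model(windows), targets)
    opt.zero_grad(); loss.backward(); opt.step()
```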

I really do believe that when we’re thinking about technology, we should always be thinking about what platforms we can create that empower other people to solve the problems that they are seeing, and to help them achieve what they want to achieve.

It can’t be just a small handful of large companies, or companies located only in urban innovation centers, that are able to make full use of the tech that we’re developing to solve problems.

What I’ve been telling folks as I’ve talked about the book is that in 2003 or 2004, when I wrote my first real machine learning system, you really did need a graduate degree in an analytical discipline.

Because of open-source software, because we’ve thought about how to solve these problems in a platform-oriented way, because we have cloud computing infrastructure that makes training power accessible to everyone, and because you have things like YouTube and online training materials that help you more quickly understand how all of these framework pieces work, my guess is that a motivated high school student could solve the same problem in a weekend. That problem took me six months more than 14 years ago.
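As a rough illustration of that shift, here is what a small text classifier can look like today with scikit-learn; the tiny dataset is invented for the example, and this is only a sketch of the kind of problem Scott means, not his original system.

```python
# A spam-style text classifier in a handful of lines of open-source tooling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting moved to 3pm",
         "claim your reward today", "lunch on thursday?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (made-up examples)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["free reward, claim now"]))  # expect it to flag spam
```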

It’s unclear exactly how many problems of intelligence you can solve with more data and more compute, which I think is one of the reasons why it’s tricky to make accurate predictions about when you get to general intelligence.

Every time that we have used AI to solve a problem that we thought was some high watermark of human intelligence, we have changed our mind about how important that watermark was.

When we were both much younger, when I was in graduate school, the problem being addressed was whether we could build a computer, an AI, that could beat a grandmaster at chess.

It really hasn’t even made a material dent in chess, other than that some of the techniques we built into our AI are now used to help humans practice and become better chess players.

I had been living in Silicon Valley and working in the technology industry for such a long time that I really had this idea in my head that maybe these technologies weren’t going to benefit people in rural America.

They had pivoted with all the twists and turns that the economy had thrown at them and built businesses that were already using the most advanced technology that they could lay their hands on.

The reason that they are competitive in this fierce global market for manufacturing is because the automation that they can leverage is just as efficient no matter where it’s running geographically.

In his research, he posited that a single high-skilled job can create five lower-skilled jobs inside of the community where the high-skilled job is created.

You can see it at scale in Germany with the Mittelstand, which typifies this model of taking high-skill, highly trained labor and augmenting it with really sophisticated technology, whether it’s a manufacturing business or a services business, or whatever.

Then you have this basic stuff that’s just shameful that it isn’t already solved, like access to broadband, or the vocational education required to ensure people can use these tools effectively to do the work of the future.

I would argue that there is a primary reason that these ingenious people decided, when they were graduate students at Stanford and Carnegie Mellon, to focus on solving the problem of how to get a vehicle to drive itself.

We ran the Apollo program in the ’60s not because there was anything especially necessary about putting a human being on the moon, but because solving that problem was a great way to focus human ingenuity at a massive scale on a set of technologies that turned out to be very beneficial.

The process of solving the problem could put into place this infrastructure that could also define entire new sectors of the industry and our economic outputs for decades ahead.

Kevin Scott: One of the ways that I think about it is that, as we invented software engineering as a discipline over the course of the past 60 years or so, we realized that finding all the bugs in software is hard.

For anybody building a machine learning model that’s going to be used in a product, the company has a set of guidelines that define what is or isn’t a potentially sensitive use of that technology.

If it is a sensitive use, then it has to go through a very rigorous review process to make sure that we are using the technology in ways that make fair and unbiased decisions; that the data we use to train the model, and the data we collect as a byproduct of the model’s use, are treated properly, preserving all of the covenants we have with our stakeholders; and that we have the degree of transparency and control that you need in the most sensitive situations.

Bias in data is something we’ve talked a lot about as a community over the past handful of years, and we now have tools that can detect when a data set has fundamental biases in it.
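One simple, hedged example of what such a check might look like, assuming pandas and a completely made-up dataset: compare positive-outcome rates across groups and flag a large gap. Real bias-detection tooling examines many more criteria, but the per-group comparison is the core idea.

```python
# Toy bias check: a big gap in positive-outcome rates across groups
# suggests the dataset is skewed. Groups and labels here are invented.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "label": [1, 1, 0, 0, 0, 1, 0, 0],
})
rates = df.groupby("group")["label"].mean()
print(rates)                      # positive rate per group
print(rates.max() - rates.min())  # a large gap hints at bias
```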

We’re using GANs (generative adversarial networks), which are another type of neural network, to generate synthetic data to compensate for those biases, so that we can train on unbiased data sets even though we may not have representative data that is naturally occurring in the data set to help us train the model in the way that we want.
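A minimal sketch of that augmentation idea, assuming PyTorch; the 2-D toy data stands in for an underrepresented group, and nothing here reflects Microsoft's actual pipeline. A small GAN is trained on the scarce group, and its generator is then sampled to produce extra synthetic examples.

```python
# Train a tiny GAN on the underrepresented samples, then use the generator
# to synthesize additional examples that rebalance the dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)
real = torch.randn(200, 2) * 0.3 + torch.tensor([2.0, -1.0])  # scarce group

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # discriminator step: real samples vs. generated ones
    fake = G(torch.randn(64, 8)).detach()
    batch = real[torch.randint(0, len(real), (64,))]
    loss_d = bce(D(batch), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: try to fool the discriminator
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(500, 8)).detach()  # extra samples for the scarce group
```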

It’s a whole spectrum of things, and I think the dialogue that we’ve got right now between stakeholders, people building these models, and people who are analyzing them and sort of pushing us and advocating for accountability—all of that’s good.

It’s going to take concerted effort by the engineers, the scientists, the ethicists, a whole range of people thinking together about how to make sure these systems are safe.

I think we might get to the point soon where you can do all of the stuff that I’m doing for three bucks or a dollar’s worth of electronics, which then makes it a very feasible way to build a user interface for something.