Artificial Intelligence: Implications for Business Strategy
Malone is a Professor of Information Technology and of Organizational Studies at the MIT Sloan School of Management, and his research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology.
His newest book, Superminds, appeared in May 2018. He holds 11 patents, cofounded three software companies, and is quoted in numerous publications such as Fortune, The New York Times, and Wired.
Garry Kasparov: Chess, Deep Blue, AI, and Putin | Artificial Intelligence (AI) Podcast
This conversation is part of the Artificial Intelligence podcast.

INFO:
Podcast website: https://lexfridman.com/ai
iTunes: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/category/ai/feed/
Full episodes playlist: https://www.youtube.com/playlist?list...
Clips playlist: https://www.youtube.com/playlist?list...

EPISODE LINKS:
Garry's Twitter: https://twitter.com/Kasparov63
Garry's Site: http://www.kasparov.com
Garry's Books:
Deep Thinking: https://amzn.to/2orHAyA
Winter is Coming: https://amzn.to/32QZ0nh
How Life Imitates Chess: https://amzn.to/36ap8vj

OUTLINE:
0:00 - Introduction
1:33 - Love of winning and hatred of losing
4:54 - Psychological elements
9:03 - Favorite games
16:48 - Magnus Carlsen
23:06 - IBM Deep Blue
37:39 - Morality
38:59 - Autonomous vehicles
42:03 - Fall of the Soviet Union
45:50 - Putin
52:25 - Life

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman
Artificial Intelligence Research Needs Responsible Publication Norms
After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.” When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article.
Some critics suggested that the company had betrayed its core mission and should rename itself “ClosedAI.” In May, OpenAI released a larger, 345M-parameter version of the model and announced that it would share the 762M and 1.5B versions with limited partners who were also working on developing countermeasures to malicious uses.
Similarly, AI startup AI21 Labs released a 345M-parameter version of its neural text generator, on the grounds that it was “equivalent in size to the publicly released versions of Grover and GPT-2.” “Curious hacker” Connor Leahy independently replicated OpenAI’s unreleased 1.5B GPT-2 and planned to publicly release it, but then decided against doing so precisely to help forge responsible release norms.
Deep fakes—computer-generated realistic video or audio—allow for new kinds of artistic expression, but they also can be used maliciously to create blackmail material, sway elections or falsely dispel concerns about a leader’s health (or the “well-being” of disappeared individuals).
Meanwhile, individual researchers have advocated calculating DREAD scores when designing machine learning systems (weighing the potential damage, the reliability of an attack, the ease of exploit, the scope of affected users, and the ease of discovery) and have outlined questions to consider before publishing.
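The DREAD rubric reduces to simple arithmetic: each factor is rated on a fixed scale and the ratings are combined into a single risk number. A minimal sketch, assuming the common 0–10 scale and an equal-weight average (the factor names follow the classic Damage, Reproducibility, Exploitability, Affected users, Discoverability breakdown; the `DreadRating` class is illustrative, not a published API):

```python
from dataclasses import dataclass


@dataclass
class DreadRating:
    """One 0-10 rating per DREAD factor (illustrative scale)."""
    damage: int           # how bad would a successful attack be?
    reproducibility: int  # how reliably does the attack work?
    exploitability: int   # how easy is the attack to launch?
    affected_users: int   # how many users would be impacted?
    discoverability: int  # how easy is the weakness to find?

    def score(self) -> float:
        """Equal-weight average of the five factors."""
        parts = (self.damage, self.reproducibility, self.exploitability,
                 self.affected_users, self.discoverability)
        if any(not 0 <= p <= 10 for p in parts):
            raise ValueError("each DREAD factor must be rated 0-10")
        return sum(parts) / len(parts)


# Hypothetical example: a text generator that is damaging at scale
# but hard to exploit without access to the full model weights.
rating = DreadRating(damage=8, reproducibility=6, exploitability=3,
                     affected_users=9, discoverability=4)
print(rating.score())  # 6.0
```

Teams sometimes weight the factors unequally (e.g., doubling damage); that only changes the averaging line, not the structure of the rubric.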
By participating in a limited information-sharing regime with trusted participants, OpenAI minimized one problem of closed publication—that it systematically advantages larger research institutions at the expense of smaller ones, risking consolidated control of technological advances.
More complete publication norms for AI would weigh and balance a broader range of factors. One critical structural question is which entity should weigh the potential risks of a technology against its potential benefits.
But open research can take many forms—including, as Nick Bostrom observed, “openness about science, source code, data, safety techniques, or about the capabilities, expectations, goals, plans, and governance structure of an AI project.” Without sharing everything, researchers can be transparent about what a system can do and their reasons for nondisclosure, so that others can weigh the commercial and security benefits of the technology against the credibility of the concerns.
The irreversibility of disclosure and the unknowability of potential harms suggest favoring nondisclosure, but adherence to a strong version of the precautionary principle may transform it into a paralyzing principle, chilling the development and spread of socially beneficial technologies.
Precisely because political and market incentives may place undue weight on the scale in favor of immediate, concrete or concentrated benefits over long-term, abstract or diffuse risks, we need to create shared ex ante principles—and, eventually, institutional structures to implement and further develop them.
As Robert Heinlein observed, “The answer to any question starting, ‘Why don’t they—’ is almost always, ‘Money.’” In thinking about how best to incentivize norm adoption, it is important to recall that regulations can shape technological development by creating carrots as well as sticks.
- On 15 April 2021
What Is Artificial Intelligence? | Artificial Intelligence (AI) In 10 Minutes | Edureka
Machine Learning Masters Program: This Edureka video on Artificial ...
Artificial Intelligence Tutorial | AI Tutorial for Beginners | Artificial Intelligence | Simplilearn
This Artificial Intelligence tutorial video will help you understand what Artificial Intelligence is, the types of Artificial Intelligence, and ways of achieving Artificial ...
Artificial intelligence & algorithms: pros & cons | DW Documentary (AI documentary)
Developments in artificial intelligence (AI) are leading to fundamental changes in the way we live. Algorithms can already detect Parkinson's disease and cancer ...
Artificial Intelligence & the Future - Rise of AI (Elon Musk, Bill Gates, Sundar Pichai) | Simplilearn
Artificial Intelligence (AI) is currently the hottest buzzword in tech. Here is a video on the role of Artificial Intelligence and its scope in the future. We have put ...
Artificial Intelligence Full Course | Artificial Intelligence Tutorial for Beginners | Edureka
Machine Learning Engineer Masters Program: This Edureka video on "Artificial ...
Where AI is today and where it's going. | Richard Socher | TEDxSanFrancisco
Richard Socher is an adjunct professor at the Stanford Computer Science Department where he obtained his PhD working on deep learning with Chris Manning ...
Types Of Artificial Intelligence | Artificial Intelligence Explained | What is AI? | Edureka
Machine Learning Engineer Masters Program: This Edureka video on "Types Of ...
What is Artificial Intelligence Exactly?
Subscribe here: Check out the previous episode: Become a Patreon ...
Elon Musk's Last Warning About Artificial Intelligence
Artificial Intelligence vs Machine Learning - Gary explains
Read more: andauth.co/AIvsML | The terms artificial intelligence and machine learning are often used interchangeably these days, but there are some important ...