AI News, What's New?

Artificial Intelligence and Machine Learning in Software as a Medical Device

Artificial intelligence and machine learning technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day.

The FDA is considering a total product lifecycle-based regulatory framework for these technologies that would allow for modifications to be made from real-world learning and adaptation, while still ensuring that the safety and effectiveness of the software as a medical device is maintained.

Adaptive artificial intelligence and machine learning technologies differ from other software as a medical device (SaMD) in that they have the potential to adapt and optimize device performance in real time to continuously improve health care for patients.

The ideas described in the discussion paper leverage practices from our current premarket programs and rely on IMDRF’s risk categorization principles, the FDA’s benefit-risk framework, risk management principles described in the software modifications guidance, and the organization-based total product lifecycle approach (also envisioned in the Digital Health Software Precertification (Pre-Cert) Program).

This plan would include the types of anticipated modifications—referred to as the “Software as a Medical Device Pre-Specifications”—and the associated methodology being used to implement those changes in a controlled manner that manages risks to patients—referred to as the “Algorithm Change Protocol.” In this approach, the FDA would expect a commitment from manufacturers on transparency and real-world performance monitoring for artificial intelligence and machine learning-based software as a medical device, as well as periodic updates to the FDA on what changes were implemented as part of the approved pre-specifications and the algorithm change protocol.
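As a rough, hypothetical illustration of what this pairing might look like inside a manufacturer's own records, the Python sketch below models a pre-specification and its algorithm change protocol as simple data structures. The field names and values are illustrative assumptions, not FDA-defined terminology or requirements.

```python
# Hypothetical sketch only: modeling an SaMD "Pre-Specification" and its
# "Algorithm Change Protocol" as plain records. Field names are illustrative
# assumptions, not FDA-defined terms.
from dataclasses import dataclass
from typing import List


@dataclass
class AlgorithmChangeProtocol:
    """How an anticipated change would be implemented in a controlled manner."""
    retraining_data_sources: List[str]   # where new real-world data would come from
    validation_plan: str                 # how performance is re-verified before release
    rollback_criteria: str               # conditions that would trigger reverting an update
    fda_reporting: str                   # what goes into periodic updates to the FDA


@dataclass
class PreSpecification:
    """A type of modification the manufacturer anticipates making post-market."""
    description: str
    performance_claims_affected: List[str]
    change_protocol: AlgorithmChangeProtocol


# Example record: one anticipated modification with its controlled-change plan.
example = PreSpecification(
    description="Periodic retraining on newly collected, labeled real-world data",
    performance_claims_affected=["sensitivity", "specificity"],
    change_protocol=AlgorithmChangeProtocol(
        retraining_data_sources=["post-market performance monitoring feed"],
        validation_plan="Re-run the locked validation set; require non-inferior accuracy",
        rollback_criteria="Any monitored metric falls below the cleared baseline",
        fda_reporting="Summarize implemented changes in periodic updates to the FDA",
    ),
)
print(example.description)
```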

Remarks by Secretary Esper at National Security Commission on Artificial Intelligence Public Conference

You know, the work this commission is doing in bringing together academia, defense and business is critically important.

I took part in what became the deepest air assault into enemy territory at that point in history.

The Gulf War was the proving ground for a new generation of military weapons and equipment, from laser-guided smart bombs to stealth aircraft, to the first widespread use of GPS.

By liberating Kuwait and defeating the Iraqi military in a matter of days, American forces demonstrated our mastery of the digital revolution, and rendered what was then cutting-edge Soviet technology obsolete.

Suddenly they noticed the hum of Russian UAVs [unmanned aerial vehicles] overhead, followed by cyberattacks against their command and control and communication systems. Immediately after, a flurry of Russian artillery rained down on them.

That is why our National Defense Strategy hinges on the ability of our forces to adapt to a security environment characterized by new threats from our strategic adversaries.

We are committed to making the investments necessary to accelerate our innovation in technologies that will help us stay ahead of the curve, especially artificial intelligence.

Future wars will be fought not just on the land and in the sea, as they have for thousands of years, or in the air, as they have for the past century, but also in outer space and cyberspace, in unprecedented ways.

President Xi has said that China must, quote, "ensure that our country marches in the front ranks when it comes to theoretical research and this important area of AI and occupies the high ground in critical and core AI technologies."

While the U.S. faces a mighty task in transitioning the world's most advanced military to new AI-enabled systems, China believes it can leapfrog our current technology and go straight to the next generation.

In addition to developing conventional systems, for example, Beijing is investing in low cost, long-range, autonomous and unmanned submarines, which it believes can be a cost-effective counter to American naval power.

As we speak, the Chinese government is already exporting some of the most advanced military aerial drones to the Middle East, as it prepares to export its next generation stealth UAVs when those come online.

In addition, Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes.

All signs point to the construction of a 21st century surveillance state designed to censor speech and deny basic human rights on an unprecedented scale.

Beijing has all the power and tools it needs to coerce Chinese industry and academia into supporting its government-led efforts.

Equally troubling are the outside firms, or multinational corporations, that are inadvertently or tacitly providing the technology or research behind China's unethical use of AI.

If our allies and partners turn to Chinese 5G platforms, for example, it will inject serious risk into our communication and intelligence sharing capabilities.

Our collective security must not be diminished by a short- and narrow-sighted focus on economic opportunity.

Moscow has already demonstrated its eagerness to use the latest technologies against democratic nations and the ideals of free and open societies.

I mentioned the Ukraine example earlier, and we expect Russia to continue to deploy increasingly high-tech AI capabilities in current and future combat zones.

The United States, on the other hand, will offer a vision of AI that upholds American values and protects our fundamental belief in liberty and human rights.

We believe there's tremendous opportunity to enhance a wide range of the department's capabilities, from the back office to the front line, and we will do this while being recognized as the world leader in military ethics by developing principles for using AI in a lawful and ethical manner.

Not only are we doing this in areas such as predictive maintenance and cyber defense, but also with more complex applications, like joint warfighting.

We also see it as a tool to free up valuable resources and manpower so our war fighters and our operators can focus on higher priority tasks in a more efficient and more effective manner.

This will require a wholesale commitment to modernizing our warfighting systems, cultivating a premier workforce, and strengthening our partnerships across the entire sector.

The ongoing continuing resolution harms military readiness and impacts our ability to accelerate AI development at the speed and scale necessary to stay ahead.

The department's history clearly demonstrates our ability to invest in, develop and deploy systems that reduce risk to our war fighters while increasing our combat effectiveness for the ultimate purpose of protecting the security of the American people.

We will ensure that we develop this technology in ways that uphold our values and advance security, peace and stability at the same time.

The real question is whether we let authoritarian governments dominate AI, and by extension the battlefield, or whether industry, the United States military and our partners can work together to lead the world in responsible AI research and application.

We need the full force of American intellect and ingenuity, working in harmony across the public and private sectors.

We need your leadership and your vision to ensure we maintain a strategic edge, and we need forums and commissions such as these to pioneer solutions that will deter aggression and provide for our collective security.

We're reaching out in a number of different ways, everything from the traditional way of posting notices and RFPs [request for proposals] and things like that, to forums, to think tank sessions, to reaching out to academics directly, if you will.

We've got to tap the best and brightest from across the country, again from all of those different sectors, and make sure we can –

And I understand, you just mentioned the think tanks, that you've had the Defense Innovation Board just come in and provide some recommended principles on ethics, because it's not just speed, it's conforming to our values as a society.

And one of the things that I'm very concerned about, and that the commission has had a lot of dialogue about, is our human resources, and how we are going to attract that talent and institutionalize it in the department.

Have you had a chance to think about those concerns, and what you might be able to do to attract the right type of talent to be able to do this business in the future?

You've got to be able to recruit them and retain them and keep them happy and busy and, you know, we've faced the same challenge over the past many years with cyber.

So what we have to do is make sure that we find different ways to attract them because we cannot compete with the private sector when it comes to compensation, but we can offer you the chance to serve your country, to do things that are very, very interesting, maybe do things that aren't legal in the private sector –

You work around a great group of people who are focused on something bigger than themselves, bigger than the bottom line, and you get committed to that.

To make sure we can bring in people, we can recruit them, we can use different techniques to bring them in mid-career, we can bring them in with different compensation packages and whatnot.

So it's not just warfighting, but it's going to be predictive maintenance, which is one of the areas in the Army, at least, where we're trying to get AI involved immediately.

You get higher reliability rates, you get fewer breakdowns, you get better efficiency on the system, et cetera.

I was in the infantry, but I know a little bit about it, but you know, if you're a tank platoon leader or a tank commander, if you will, you're –

But imagine a world, though, where you have AI integrating into all of your sensors and everything, where the AI is constantly scanning a horizon and it's immediately, within milliseconds, it's –

what's a civilian truck and what's an enemy combatant vehicle, what's a tank and what's a fighting vehicle, which one has its turret pointed to you and which one doesn't, which one is your immediate threat?

What are your thoughts for the future, and how do you think your leadership will be able, top down as well as bottom up, to draw our department past that bureaucracy that you're discussing and into that future?

There's a pipeline, as you know, that you went and experienced, not just in the Army, but now in the broader context of DOD, where science and technology, early research and that investment is so important.

And you have an opportunity here with this community, it's a mix of academic, it's a mix of civilian, et cetera.

There are serious issues out there, and we've been asleep at the switch now for quite some time. And we're finally waking up here in the past couple of years.

The Real Value of Artificial Intelligence in Nuclear Command and Control

The authors, Adam Lowther and Curtis McGiffin, argued that, in order to modernize and keep its Cold War-era nuclear command, control, and communications (NC3) system credible against increasingly sophisticated adversaries, America should develop and deploy an “automated strategic response system based on artificial intelligence.” They explained that such a move was necessary since “Russian and Chinese nuclear modernization is rapidly compressing the time U.S. leaders will have to detect a nuclear launch, decide on a course of action, and direct a response.”

That does not demand a “dead hand” solution as a response — an automated system that processes indications and warning and is authorized to make launch decisions with humans outside the loop — but it does mean that a response is necessary.

When asked about AI’s effect on NC3 and nuclear modernization, Hyten said, “I think AI can play an important part.” Hyten’s designated lead for these matters, the director of the recently established USSTRATCOM NC3 Enterprise Center, Elizabeth Durham-Ruiz, has explicitly and publicly stated the need for recruitment and retention of AI experts for the NC3 modernization effort: “We need to be innovative in our approaches while accessing the talent needed to enhance our current workforce and go fast while we partner with academia and industry to establish the pipelines to build a talent workforce for the long term.”

In addition, the Defense Science Board Task Force on National Leadership Command Capability's effort to investigate a variety of areas where novel AI will have an impact on national security demonstrates that the question of integrating artificial intelligence into NC3 is not merely conjecture, but rather an established and ongoing set of deliberations.

In Russia, the result was the Perimetr project, “an automatic system of signal rockets used to beam radio messages to launch nuclear missiles if other means of communication were knocked out.” The dead hand system was not as ominous as it sounds.

The dead hand system, most likely utilized within the Perimetr project, simply used a machine to combine Soviet command and control changes and present these changes to the human who was ultimately in charge of the nuclear button.

Our recent research and collaborative efforts have asked the question of how novel artificial intelligence techniques — namely deep learning — may be integrated into the vast NC3 enterprise and potentially address some of these risks.

These statistical methods (techniques of machine learning that are growing in popularity due to their broad applications) in fact do not appear dead at all, but rather constantly learn from themselves through the deep layers of neural nets.

Upon closer investigation, those systems — if they are to take advantage of a deep learning approach — would reside well short of the creation of a Skynet environment in which humans no longer make nuclear launch decisions.

We are convinced that time is most usefully spent debating the technical positives and negatives of such integration in a manner that does not simply classify perspectives on the discussion as “that’s crazy” or “just don’t,” or as vaguely as stating that there is a need for an “automated strategic response system based on artificial intelligence.” Anyone who knows about AI and these matters should weigh in with their insights regarding AI safety and security to shape the world that is coming.

Despite the opening of discussions between policy and AI experts, some convened by Tech4GS — a Bay Area think tank and accelerator that both of us represent — considerably more work needs to be done to bring thought leaders from these communities into a deeper analytical discussion.

As the ways in which adversaries engage in warfare shift, there is technology available that could potentially bring game-changing advances to a wide range of NC3-related areas — to include obvious ones like data analytics and decision-support systems.

What is the potential for accrued risk within a large stack of subsystems (i.e., what is the compounded error rate, and the accordant risk, created when an increasing number of subsystems rely on vulnerable deep learning techniques to provide information up the stack)?
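To make the compounding concrete, here is a minimal sketch that assumes, purely for illustration, that each subsystem errs independently with the same fixed per-query probability; real NC3 subsystems would be correlated and far more complex, so these numbers are not estimates of any actual system.

```python
# Minimal illustration of compounded error across a stack of subsystems.
# Assumption (for illustration only): each subsystem errs independently with
# the same per-query probability p.

def compounded_error(p: float, n_subsystems: int) -> float:
    """Probability that at least one of n subsystems passes bad information up the stack."""
    return 1.0 - (1.0 - p) ** n_subsystems


# Even a 1% per-subsystem error rate compounds quickly as the stack deepens.
for n in (1, 5, 10, 25, 50):
    print(f"{n:2d} subsystems -> {compounded_error(0.01, n):.1%} chance of at least one error")
```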

This includes, but is not limited to: identification of false positives and negatives, various efficiencies in data processing and analysis (and thus increased productivity), drawing previously inconceivable connections, enhancing decision aids, and identifying anomalous activity.

However, great care should be taken in considering the degree to which deep learning is integrated into future NC3 systems, including where exactly within the broad enterprise novel AI techniques induce the least amount of critical — or unacceptable — risk.

How artificial intelligence will change your world in 2019, for better or worse

From a science fiction dream to a critical part of our everyday lives, artificial intelligence is everywhere. You probably don't see AI at work, and that's by design.

Artificial Intelligence & the Future - Rise of AI (Elon Musk, Bill Gates, Sundar Pichai)|Simplilearn

Artificial Intelligence (AI) is currently the hottest buzzword in tech. Here is a video on the role of Artificial Intelligence and its scope in the future.

What is Artificial Intelligence Exactly?

Artificial intelligence: What the tech can do today

Is the artificial intelligence we see in science fiction movies at all realistic? Many tech industry experts believe the idea of a superintelligent or sentient AI is ...

✪ TOP 5: NEW Artificial Intelligence Technology You NEED To See (AI Gadgets 2017)

Check out our latest picks of Top 5 Awesome New Artificial Intelligence Technology and Amazing AI Gadgets.

What is Artificial Intelligence (or Machine Learning)?

What is AI? What is machine learning and how does it work?

Artificial Intelligence In 5 Minutes | What Is Artificial Intelligence? | AI Explained | Simplilearn

Technology: AI in China

Companies and governments are turning to artificial intelligence to make streets safer, shopping more targeted and health care more accurate. China is one of ...

CES 2019: AI robot Sophia goes deep at Q&A

Things get strange - sometimes existential - when Hanson Robotics' AI Sophia fields questions from the audience, a religious ...

What Is Artificial Intelligence? | Artificial Intelligence (AI) In 10 Minutes | Edureka
