AI News, On Think Tanks artificial intelligence
Georgetown to launch AI think tank
The announcement follows an executive order that President Donald Trump signed in February to increase AI research and development.
SUNY, meanwhile, will also receive $30 million in cash and in-kind contributions from IBM to fund system-wide AI research and a $300 million grant from Empire State Development to make capital investments in the new center.
The system hopes the new tool will make it easier for faculty to source documents from a central location. More broadly, it shows how colleges are using AI to benefit their operations.
Penn State, for example, has developed AI prototypes to help faculty develop courses, assemble course packs from open-source materials and automate assessments.
The university's senior director for Teaching and Learning with Technology, Jennifer Sparrow, said at Educause this past fall that the features are meant to be a starting point for faculty, rather than a plug-and-play option to get a finished product.
The Military Wants to Build Deadly AI-Controlled Tanks
The U.S. Army just called on experts in the field to help it develop technology that would allow a ground combat vehicle like a tank to automatically detect, target, and engage enemy combatants.
The Advanced Targeting and Lethality Automated System (ATLAS) would theoretically give a tank the ability to do everything necessary to take down a target except pull the trigger — a human operator would still need to actually fire, according to Quartz.
In the age of AI, think tanks must evolve
I am not particularly interested in the second U.S.-North Korea summit taking place this week in Vietnam, because it is easy to predict that its consequences will be insignificant, if not meaningless.
The following is a summary of what Hisano and Tatsumi stated in the forum: CIGS has its own research unit on AI/big data, which is unique because it consists of scholars with expertise in mathematics, systems engineering, data science, or physics.
In addition, CIGS is working hard to maximize synergies through combined research efforts on the following subjects:
- Analyzing U.S.-China bilateral trade issues by using customs data to assess the impact of escalating trade tensions between the two nations.
- Using real estate price data to analyze the risk of real estate bubbles: CIGS’s case study of Japan’s bubble economy may be applicable to analyses of real estate prices in China.
- Using GPS and other positioning data from devices such as smartphones to analyze the movement of people, which may provide valuable information for solutions to ensure “smart”
- Analyzing SNS activity to assess political divides and the activities of political groups motivated by ideology, religion, etc.: this can be a useful tool for identifying and locating potential extremist groups.
If this is the unfortunate reality we must face, it is imperative for all foreign policy and national security analysts to have at least a basic knowledge of the mechanisms and applications of AI/big data technologies when they discuss foreign policy.
Calling for global consensus on data governance, Abe stated at the World Economic Forum on Jan. 23 that he “would like Osaka G20 to be long remembered as the summit that started worldwide data governance.”
All in all, AI-related issues are so important and serious that think tanks must further develop international coordination and work together to make the best use of AI while not allowing the abuse of such technologies.
Online Symposium: Chinese Thinking on AI Security in Comparative Context
Geo-Technology Practice Head, Eurasia Group, and China Digital Economy Fellow, New America
Research Associate and Policy Officer, Leverhulme Centre for the Future of Intelligence, University of Cambridge
The CAICT document on AI and security/safety represents a broad and well-thought-out effort by a key Chinese government technology think tank to assess the key issues and propose some steps forward.
Since China released the national New Generation AI Development Plan (AIDP) in July 2017, issues around safety/security and ethics have begun to gain traction within China’s research establishment around AI, and within the broader private sector, which is very much driving China’s AI sector.
That paper was a collaborative effort among a number of government think tanks involved with AI research and policy and leading private sector companies, both big players and smaller companies that are part of a dynamic group of AI startups, including iFlytek, Huawei, Alibaba, Tencent, and SenseTime.
In addition, in late 2017 and early 2018, the Chinese government announced the formation of two key advisory groups: the New Generation AI Strategic Advisory Committee (for full list see here), and a China AI standardization general group and consulting team.
The standards general group includes a much broader array of companies, while the consulting team is drawn primarily from academic, government think tank, and even defense industry representatives.
Furthermore, the European Union is trying to harness public opinion and bring a more diverse set of voices to this discussion via the AI Alliance, a platform where all of society can provide feedback on the progress of the AI HLEG and raise potential concerns.
One of the major differences in the Chinese context is the lack of true NGOs able to participate in broader discussions around these issues, and the realization within government sectoral standards and technical organizations that AI applications already play, or will increasingly play, a role in their sectors.
Adjunct Senior Fellow, Center for a New American Security
Beyond their enthusiasm about the positive potential of AI, Chinese technical leaders and policymakers are also starting to engage with concerns over the safety and security implications of rapid advances in AI technologies, which are recognized as a “double-edged sword.” As this white paper describes, rigorous and sophisticated consideration of a range of risks and issues that might arise with advances in AI is underway at CAICT, which has emerged as a key player on these issues.
However, I would highlight certain elements of this discussion and framework that reveal the extent to which there can be an ideological dimension to the Chinese government’s approach to these issues, raising concerns about what China’s aspirations for leadership in AI mean for the future of these technologies.
In particular, the white paper identified risks to state/national security from AI that include not only military concerns but also the security of China’s political system, including “hidden dangers” of the impact of AI on public opinion.
Going forward, it will be interesting to see whether the concerns over the risks posed by AI to national security will extend beyond such technical discussions to shape the Chinese military’s approach to its own research and development of AI applications.
This white paper raises concerns that AI “can be used to build new-type military strike forces, directly threatening national security,” and it points out: “the applications of intelligent weapons will cause: control to become remote, increased precision of strikes, miniaturization of conflict domains, and process intelligentization.” However, the discussion of trends toward “a new round of arms race” highlights U.S. and Russian efforts without acknowledging the PLA’s own extensive investments and developments in the advancement of military intelligentization, which Xi Jinping personally has urged the PLA to advance.
As concerns of AI ethics and safety emerge as a core element of the U.S. Department of Defense’s own AI Strategy, perhaps China will consider providing greater transparency on the extent to which these concerns influence its own approach to AI for national defense, beyond this initial consideration of such issues by CAICT.
The international community should expect the release of additional draft policies that detail specific content guidelines for assessing the security risks that AI should be able to detect, particularly in public-facing AI tools.
For example, the inclusion of societal security risks could help expose systemic social threats from AI that do not always receive sufficient attention from national authorities, but which could be massively destabilizing to nations and regions of the world.
This framework aligns with that view by showing a strong focus on the identification and remediation of technical risks posed by AI, as well as ensuring the outcomes of AI can be understood, managed, and controlled—but it doesn’t go near questions of when and which AI use cases are appropriate from an individual rights perspective.
This potentially dangerous rhetoric (presumably based in large part on the view that China’s government engages in pervasive surveillance) is often used to conclude that efforts to build safe or ethical AI are futile, and that democratic countries only hinder their AI development by paying attention to algorithmic vulnerability or algorithmic bias along the way.
- On Monday, June 1, 2020
How Artificial Intelligence & Think Tanks Are Targeting Muslims (Dajjal's System) (Shocking!!!)
Assalam alakum, Please leave your thoughts, ideas & research ...
Artificial Intelligence Laptop Cases - Think Tank Photo
Smartly designed cases and accessories to meet your laptop carrying needs. KEY FEATURES: - Slim and unique design allows for storage without bulk ...
The Future of Artificial Intelligence (AI) in the Enterprise (Full Discussion) | Adobe Think Tank
Artificial intelligence (AI) and machine learning represent a major technological shift that is spurring a 21st century renaissance around customer experiences, ...
2017 Healthcare Think Tank Session 3: Big Data and Artificial Intelligence
At the 2017 DellEMC Healthcare Think Tank Session 3, a group of healthcare experts discuss and debate ...
Adobe Think Tank: Using Artificial Intelligence (AI) to Develop Learning in Organizations
Nara Logics CEO Jana Eggers discusses the importance of using AI to transform your business into a learning organization that's agile and responsive.
Live Better & Longer with Artificial Intelligence
Initiated on the occasion of the 2018 Year of Innovation (following the Joint Declaration on Innovation on the occasion of the State Visit by French President François ...
Adobe Think Tank: The Impact of Artificial Intelligence (AI) On Data Analytics
Daniel Raskin, CMO of Kinetica, stops by to talk AI & how we can improve the human & machine relationship to gain better insights from data.
The Artificial Intelligence Race and the New World Order
Council on Foreign Relations - The Malcolm and Carolyn Wiener Lecture on Science and Technology. A discussion of advances in artificial intelligence, the ...
Adobe Think Tank: Building an AI-Centered Business
At the latest Adobe Think Tank, we spoke to Robbie Allen of Infinia ML about industry trends and challenges that are arising as businesses integrate AI and ...
Artificial Intelligence - The War on Consciousness with Dr. Graham Downing
CRITICAL INFORMATION. Artificial Intelligence: Humanity's greatest achievement or its greatest threat? Dr. Graham Downing presents the 'Cutting-Edge' ...