AI News: Is China at an advantage in the AI race because of GDPR?
Andrew Ng (born 1976) is a Chinese-American computer scientist known as one of the most prolific researchers in machine learning and AI, with work that helped spark the recent revolution in deep learning.
Also a business executive and investor in Silicon Valley, Ng co-founded and led Google Brain and served as Vice President and Chief Scientist at Baidu, building the company's Artificial Intelligence Group into a team of several thousand people.
In 2018 he launched AI Fund, initially a $175 million investment fund backing artificial intelligence startups, which he currently heads.
In 1997, he earned his undergraduate degree with a triple major in computer science, statistics, and economics at the top of his class from Carnegie Mellon University in Pittsburgh, Pennsylvania.
At MIT he built the first publicly available, automatically indexed web search engine for research papers (a precursor to CiteSeer/ResearchIndex, but specialized in machine learning).
He became Director of the Stanford Artificial Intelligence Lab, where he taught students and undertook research related to data mining, big data, and machine learning.
His machine learning course CS229 at Stanford is one of the most popular courses on campus, with over 1,000 students enrolling in some years.
Since joining Stanford in 2002, he has advised dozens of Ph.D. and M.Sc. students, including Ian Goodfellow, Quoc Le, and many others who have gone on to work in academia or at companies such as Google, Facebook, Apple, Twitter, and 23andMe.
Ng's research focuses primarily on machine learning, deep learning, machine perception, computer vision, and natural language processing.
In 2011, Ng founded the Google Brain project at Google, which developed large-scale artificial neural networks using Google's distributed computing infrastructure.
The rationale was that an efficient computing infrastructure could speed up statistical model training by orders of magnitude, ameliorating some of the scaling issues associated with big data.
Among its notable results was a neural network, trained using deep learning algorithms on 16,000 CPU cores, that learned to recognize cats by watching YouTube videos, without ever having been told what a 'cat' is.
Within Stanford, they include Daphne Koller with her 'blended learning experiences' and co-designing a peer-grading system, John Mitchell (Courseware, a Learning Management System), Dan Boneh (using machine learning to sync videos, later teaching cryptography on Coursera), Bernd Girod (ClassX), and others.
It offered a similar experience to MIT's OpenCourseWare, except that it aimed to provide a more 'complete course' experience, with lectures, course materials, problems and solutions, etc.
Widom, Ng, and others were ardent advocates of Khan-style tablet recordings, and between 2009 and 2011 Stanford instructors recorded and uploaded several hundred hours of lecture videos.
The course featured quizzes and graded programming assignments and became one of the first and most successful massive open online courses (MOOCs) created by a Stanford professor.
One of the students (Frank Chen) claims another (Jiquan Ngiam) frequently stranded him in the Stanford building and refused to give him a ride back to his dorm until very late at night, so that he had no choice but to stick around and keep working.
This is a non-technical course designed to help people understand AI's impact on society and its benefits and costs for companies, as well as how they can navigate through this technological revolution.
Ng has filed patents on a sundry range of inventions, from text-to-speech (TTS) systems, compressed video and audio recordings, and rechargeable batteries to electronic roll towel dispensers, an energy-saving cooker, and a one-size-fits-all T-shirt.
Ng is one of the scientists credited with bringing humanity to AI, and he sees AI as a technology that will improve people's lives, not a menace that will 'enslave' the human race.
In 2017, Ng said he supported basic income to provide people tools to learn about AI and spend time studying so that they can re-enter the workforce as productive members.
Artificial Intelligence: Can Humans Drive Ethical AI?
Artificial intelligence (AI) is a powerful technology that’s driving innovation, boosting performance, and improving decision-making and risk management across enterprises.
Businesses using advanced AI today are already seeing value in the form of reduced costs, accelerated time-to-market, increased customer retention and improved employee engagement.
As the system is exposed to more and more data, it gets better at learning and is able to optimise the algorithm to achieve better performance, which can lead to new insights and better decision rules.
Because an AI system is capable of a processing speed and capacity far beyond those of humans, it can learn and begin to develop its own decision rules quickly, often moving in an unanticipated direction.
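As a toy illustration of how exposure to more data sharpens a learned decision rule, consider the following sketch. It is entirely hypothetical: the loan-score range, the hidden threshold of 600, the noise level, and the crude "midpoint of class means" learner are all invented for this example, not drawn from any real system.

```python
import random

def learn_threshold(samples):
    # Learn a 1-D decision rule: approve when score >= threshold.
    # The threshold is placed midway between the class means (a crude learner).
    approved = [score for score, label in samples if label == 1]
    rejected = [score for score, label in samples if label == 0]
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

def make_samples(n, rng, true_threshold=600):
    # Noisy synthetic data generated around a hidden 'true' rule.
    samples = []
    for _ in range(n):
        score = rng.uniform(300, 900)
        noise = rng.gauss(0, 40)
        samples.append((score, 1 if score + noise >= true_threshold else 0))
    return samples

rng = random.Random(0)
small = learn_threshold(make_samples(20, rng))    # rule learned from 20 samples
large = learn_threshold(make_samples(5000, rng))  # rule learned from 5,000 samples
```

With thousands of samples, the learned threshold settles near the hidden rule; with a handful, it can land anywhere the noise pushes it, which is the "unanticipated direction" the passage describes.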
If loan officers have historically made biased decisions in rejecting individuals belonging to a certain race, gender or age group, for example, the development data for the AI will reflect these biases.
The scarcity of comprehensive, unbiased or current data sets for training algorithms contributes to the data bias problem — and if everyone uses the same training sets, the bias perpetuates throughout AI applications.
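A minimal sketch of how historical bias flows into training data. The groups, scores, and approval counts below are invented purely for illustration; the point is that a model fitted to biased labels reproduces the bias even when applicants are otherwise identical.

```python
# Hypothetical historical loan decisions: applicants with identical scores,
# but past loan officers systematically rejected one group more often.
history = (
    [({"score": 700, "group": "A"}, 1)] * 90 + [({"score": 700, "group": "A"}, 0)] * 10 +
    [({"score": 700, "group": "B"}, 1)] * 40 + [({"score": 700, "group": "B"}, 0)] * 60
)

def approval_rate(data, group):
    # Historical approval rate per group, which a naive model
    # trained on these labels would simply learn to reproduce.
    decisions = [label for applicant, label in data if applicant["group"] == group]
    return sum(decisions) / len(decisions)

rate_a = approval_rate(history, "A")
rate_b = approval_rate(history, "B")
```

Identical creditworthiness, very different learned approval rates: any system trained on this data inherits the officers' bias, and if everyone trains on the same data, the bias propagates.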
Protiviti’s AI survey results identify four key areas that need to be addressed to ensure the validity of an AI system’s performance. AI has the potential to deliver efficiencies, decrease errors and create wealth quickly.
As companies begin to implement AI and machine learning to automate repetitive tasks in the workplace, employees whose jobs are replaced by technology-enabled automation may fear becoming unemployed.
Because stewardship of AI must always remain the work of human beings, the workforce capacity liberated by AI could be redirected to this richer set of responsibilities, including analysing AI outputs and monitoring AI systems as they learn and progress.
An organisation’s AI strategy should be set with the active engagement of the CEO and board of directors, who should be able to articulate clear goals with respect to the AI program, as well as clear ethical standards that should guide it.
The goals set by senior leaders must ensure AI implementations are aligned to desired business outcomes and include human agency and oversight, privacy and data governance, transparency, fairness, sustainability and accountability.
To keep AI applications aligned to business outcomes, AI initiatives should have executive buy-in and be led by line-of-business leaders. The organisational structure must ensure and facilitate close collaboration between AI experts and business partners.
Lucas Lau, Protiviti’s director of machine learning/deep learning, observes that the three skill sets critical to AI — business knowledge, data science and data engineering — are rarely found in one individual, but that both AI teams and leadership need access to these skills.
Lefferts adds, “Starting with clear goals, it’s up to humans to ensure that AI systems are designed with the right algorithms, fed the right data and monitored properly to keep them aligned with the best goals and values of humanity.”

Protiviti’s interdisciplinary teams help solve our clients’ unique business challenges using data and analytics and leveraging technologies such as AI and machine learning.
Our professionals bring deep industry expertise and extensive technology and consulting experience to implement technical solutions and change programs that enable clients to create a competitive advantage and capitalise on financial benefits from adopting AI/ML.
Harnessing artificial intelligence
Artificial intelligence (AI) is changing the economy: it is transforming the way we shop, the way we communicate, and the way we do research.
US investment bank Goldman Sachs argues that AI: “is a needle-moving technology for the global economy […] impacting every corporation, industry, and segment of the economy in time”. AI is an enabler that some have likened to the invention of the combustion engine or electricity –
AI generally refers to efforts to build computers able to perform actions that would otherwise require human intelligence, such as reasoning and decision-making.
Recently, though, computers have improved in performance and more data have become available: in fact, a 2017 report estimated that 90 percent of the world’s data had been created within the preceding five years.
However, the bank also warns that: “Management teams that fail to invest in and leverage these technologies risk being passed by competitors that benefit from the strategic intelligence, productivity gains, and capital efficiencies they create.” Given that companies are warning of the risk of being overtaken by competitors that adopt AI, states should take a hard look at whether they do enough with regard to AI applications to guarantee their economies’
This competition can even touch on matters important to the culture and history of each country, such as when a reported 280 million people in China watched a machine owned by Google parent company Alphabet win at Go against one of the world’s best human Go players.
Kennedy’s landmark speech calling for America to land a man on the moon”. If AI is indeed like the combustion engine or electricity in its transformative potential, failing to adopt this technology will have both economic repercussions and could lead to massive geopolitical gaps between countries.
takes a clearly geopolitical approach, and emphasises that: “Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities”.
This is worrying, given the distinct risk that states around the world may adopt techno-nationalist agendas, including increased protectionism to support national champions.
In a noteworthy essay, Ian Hogarth, a machine learning engineer and AI investor, warns that: “machine learning will be such a dramatic cause of instability that nation states will be forced to put their citizens ahead of broader goals around internationalism”.
In light of this, it is crucial that the EU, its member states, and European countries outside the EU more broadly avoid falling behind in AI research and use, and that they remain aware of the impact AI may have on their economies and societies.
Ian Hogarth puts this nicely: “There are perhaps 700 people in the world who can contribute to the leading edge of AI research, perhaps 70,000 who can understand their work and participate actively in commercialising it and 7 billion people who will be impacted by it.”
The scarcity of AI researchers has made them a precious commodity, with Microsoft Research chief Peter Lee comparing the cost of hiring a leading AI researcher to hiring a National Football League quarterback. This scarcity has even led to the practice of “acquihires”, whereby larger firms take over smaller firms with the primary aim of hiring their employees.
As an example, Tesla’s fleet of vehicles has accumulated more than 1.2 billion miles of driving data, and in 2011 alone US Air Force drones amassed about 37 years’ worth of video footage.
This is leading to increasing interest in and development of GPUs, more specialised electronic circuits that are fast emerging as a pillar of AI. Cloud companies (such as Google, Microsoft, and Tencent, which are primarily American and Chinese) are investing in such hardware.
The value of the AI-related hardware market (computing, memory, storage) is predicted to reach over $100bn by 2025, with US and Chinese first-movers capturing most of it. How do the two leading AI markets –
Goldman Sachs believes that: “talent of the highest calibre has and will continue to drive the innovative nature of the industry in China”. While the Chinese BAT companies (Baidu, Alibaba, and Tencent) underspend Google and Microsoft slightly on Research and Development (R&D), they have higher percentages of R&D employees. China’s internet users are more numerous than those of any other country.
During its Singles’ Day shopping festival in 2016, Alibaba recorded 175,000 transactions per second. In addition, Chinese data privacy and data collection rules are lax, and Chinese users tend not to be as concerned about data privacy as the inhabitants of many Western countries are.
Article 7 of China’s National Intelligence Law gives the government legal authority to compel such assistance, though the government also has powerful non-coercive tools to incentivize cooperation.” US companies, on the other hand, are much less national –
Their comparatively small size and strong data security rules mean that, in comparison with their colleagues elsewhere, European AI researchers and developers have relatively limited access to data pools.
There are, however, areas in which European companies show strength, such as in natural language processing, where almost half of the 12 key companies are European. The Economist has observed that: “Germany has as many international patents for autonomous vehicles as America and China combined”. With DeepMind based in London, Europe does have one global champion in AI –
It is also relevant that European populations tend to see AI, as with technological advances more broadly, not as an opportunity but as a threat: survey after survey has found higher levels of scepticism, if not outright rejection, of AI in Europe than in the US and, even more so, than in China.
Key findings of this study include: “73 per cent of people in China [believe that] the future impact of digital technology will be positive overall, as well as in terms of its ability to create jobs and address societal challenges.”
Although multi-country surveys rarely capture cultural nuances and should be used with caution, the results nevertheless point to generally higher levels of scepticism in Europe, and the impact of scandals such as that surrounding Cambridge Analytica.
 In 2016, venture capital investment in the EU totalled about €6.5 billion, while the comparable US figure was €39.4 billion. And, as noted above, the EU’s regulatory framework and free-market policies forbid a Chinese-style government approach to sheltering and nurturing its tech industry. For Europe, the risks associated with missing the boat on AI are potentially enormous.
France’s AI strategy is already heading in the right direction on this when it argues that: “The public authorities must introduce new ways of producing, sharing and governing data by making data a common good”.
It plans to achieve this by opening up data gathered as part of government and publicly funded projects, and by incentivising private players to make their data public and transparent.
Europe’s AI industry has made clear its concerns about falling further behind its international competitors: more than 2,000 experts from CLAIRE (the Confederation of Laboratories for Artificial Intelligence Research in Europe) recently called for large-scale funding from the EU to counter China’s and America’s rapid progress.
$25 billion more in a time period that was ten times shorter. Projects that explicitly aim to fund “moon-shot projects”, such as the Franco-German JEDI (Joint European Disruptive Initiative), are therefore a step in the right direction.
In the digital realm, it already has a headstart: “Europe seems to be in the lead when it comes to setting standards for regulation and privacy protection in the digital age”, comments Deutsche Bank, specifically citing the General Data Protection Regulation as evidence of this strength. Emmanuel Macron has been outspoken on this front too, declaring that: “My goal is to recreate a European sovereignty in AI …
Europe’s focus on data privacy, as Kai-Fu Lee notes, “will cause the American giants some amount of trouble and may give local European entrepreneurs the chance to build something that is more consumer and individual-centric …
In this respect, Europe should also look for other likeminded partners among its liberal democratic allies, such as Canada or Australia, to further increase the area in which such rules are applied and thereby increase their impact.
Indeed, there may even be opportunities for European countries that they have not yet acknowledged: the new competitive landscape could, in fact, benefit middle powers, as they will have greater capacity to compete than they did in the creation of the complex –
Political scientist Michael Horowitz argues: “As long as the standard for air warfare is a fifth-generation fighter jet, and as long as aircraft carriers remain critical to projecting naval power, there will be a relatively small number of countries able to manufacture cutting-edge weapons platforms.
Horowitz even goes as far as to say that it is “possible, though unlikely, that AI will propel emerging powers and smaller countries to the forefront of defense innovation while leaving old superpowers behind”.
Beyond lethal autonomous systems, whose possible development and use have become a hotly debated issue and given rise to public protests (for good reason), there are many AI applications in the military realm that are attractive to armed forces, as they can help to lower costs, reduce the need for human operatives, and improve planning and foresight.
a fact that became known to the wider public in June 2018, when, following protests from its employees, Google ended ‘Project Maven’, a joint initiative with the US Department of Defense that aimed to use AI to analyse data collected by drones.
An educated and informed population may also be more resistant to handing over too much of its data to US (or Chinese) firms and insist on better privacy laws, thereby strengthening Europe’s regulatory power.
Indeed, it is in this element that Europe has a chance to go beyond mere sovereignty to become a norm-setter, embedding its ethics and values into AI governance and development, and serving as an example to fight back against AI nationalism.
In doing so, it will need to take significant steps itself, such as rapidly educating its own citizens and policymakers, as well as substantially increasing investment in AI and carefully choosing which subfields of AI to fund.
If it fails to do so, it is liable to find itself surrounded by more powerful rivals that have set the ground rules for AI, leaving it unable to compete or to provide citizens with the protection that they expect and deserve.
GDPR — How does it impact AI?
The vast scope of GDPR has raised fresh challenges — chief among them is the complex interaction between AI and the GDPR.
In the UK, the government has championed the flourishing AI sector, underscoring the country’s position as a true leader in emerging technologies, and is working towards making the UK a global centre for data-driven innovation.
This is firmly on the agenda for key sector players who are leading by example — for instance, a new code of conduct for the use of AI in the NHS was recently launched to ensure that only the safest and best systems are used.
Aiming to instil responsible practices, Article 22 prescribes that AI — including profiling — cannot be used as the sole decision-maker in choices that can have legal or similarly significant impacts on individuals’ rights, freedoms and interests.
There are exceptions to the rule in scenarios where the decision is necessary for entering into a contract, when union or member state law authorises such decisions — for example, to detect tax fraud — or when the data subject gives his or her explicit consent.
Beyond this, organisations face a process of trial and error in terms of applying this to their own systems, with the added pressure of even the smallest mistake potentially causing very damaging consequences.
The grey areas of data protection
Playing devil’s advocate, one could argue that automated decision-making is often justified, such as when an AI tool rejects a job application because the applicant has not provided sufficient information.
Profiling, as part of AI decision-making, could result in repercussions when collecting and processing sensitive data such as race, age, health information, religious or political beliefs, shopping behaviour and income.
Another effective solution might be for companies to sidestep the requirement of Article 22 altogether, by designing the AI to step back one stage: the system collects and presents the relevant inputs, and the individuals concerned make the final decision.
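One way such a human-in-the-loop arrangement might look in code. This is a hypothetical sketch only: the `Recommendation` fields, the scoring threshold, and the review-queue shape are all invented for illustration, not taken from any real compliance system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    applicant_id: str
    score: float   # model confidence that the application should be approved
    inputs: dict   # the evidence behind the score, surfaced for the reviewer

def review_queue(recommendations, threshold=0.5):
    # Instead of letting the model decide, route each recommendation,
    # together with its supporting inputs, to a human reviewer who
    # makes the final, legally significant decision.
    queue = []
    for rec in recommendations:
        suggested = "approve" if rec.score >= threshold else "reject"
        queue.append({
            "applicant": rec.applicant_id,
            "suggested": suggested,
            "evidence": rec.inputs,
            "final_decision": None,  # to be filled in by the human reviewer
        })
    return queue

queue = review_queue([
    Recommendation("a-1", 0.82, {"income_verified": True}),
    Recommendation("a-2", 0.31, {"income_verified": False}),
])
```

Because every `final_decision` is left to a person, the AI never acts as the sole decision-maker, which is the arrangement the paragraph above describes.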
Regulation will not stem the advance and potential of next-generation technology as long as people and businesses are well prepared and focus on the underlying principles of the GDPR — protecting the privacy of individuals and ethical practices.
On 10 July 2020
Evgeny Morozov: The Geopolitics Of Artificial Intelligence
Artificial intelligence has rapidly emerged as a topic of immense interest not just for economists and entrepreneurs but also for observers and practitioners of ...
The Ethics and Governance of AI opening event, February 3, 2018
Chapter 1: 0:04 - Joi Ito Chapter 2: 1:03:27 - Jonathan Zittrain Chapter 3: 2:32:59 - Panel 1: Joi Ito moderates a panel with Pratik Shah, Karthik Dinakar, and ...
Darrell M. West – The Future of Work: Robots, AI, and Automation
Robots, artificial intelligence, and driverless cars no longer represent things of the future. They are with us and will become common in coming years, along with ...
The Hugh Thompson Show: Artificial Intelligence APJ Style
Hugh Thompson, RSA Conference Program Chair, RSA Conference Panelists: Dr Ayesha Khanna, Co-Founder and Chief Executive Officer, ADDO AI Mahmood ...
Shaping a 21st Century Workforce – Is AI Friend or Foe?
Visit: 0:30 - Introduction by Rui de Figueiredo 5:04 - Main Talk - Jennifer Granholm 51:18 - Audience Questions Jennifer Granholm, former ..
The Safety of AI: Risks, Regulations and Responsible Business
Artificial Intelligence is the very definition of a double-edged sword. With it we have the power to transform society, for better or for worse. What are some of the ...
China & the Internet: Looking In & Looking Out
Samm Sacks, Senior Fellow, Technology Policy Program, CSIS Moderator: Chris Merritt, Chief Revenue Officer, Cloudflare.
Keeping an Eye on AI with Dr. Kate Crawford
Episode 14 | February 28, 2018 Today, Dr. Crawford talks about both the promises and the problems of AI; why— when it comes to data – bigger isn't ...
Will You Still Have a Job When the Robots Arrive? AI and its Impact on the Workforce
Jason Furman, Professor of the Practice of Economic Policy, Harvard Kennedy School; Chairman, White House Council of Economic Advisors (2013-2017) ...