AI News: French Startups and Players in Artificial Intelligence

Strategic Forum on Artificial Intelligence

On the program:
9:30 a.m. – Registration and networking coffee
10:30 a.m. – Welcome message and introduction
10:45 a.m. – A busy year: developments and new players in AI
11:00 a.m. – Creating value with AI
11:15 a.m. – Strength in numbers: how to bring together the Montréal ecosystem
11:30 a.m. – Ethics and the responsible development of AI: necessity or opportunity
11:50 a.m. – Breaking down barriers between academia and the business world
12:05 p.m. – Question period
12:10 p.m. – Networking lunch
1:10 p.m. – Who owns data?

p.m. – Break and networking
3:15 p.m. – Busting the myths: when AI is good for work and for workers
3:25 p.m. – Imagining the bank of the future
3:35 p.m. – Encouraging the creation of AI start-ups
3:50 p.m. – Exporting AI solutions internationally
4:00 p.m. – Becoming the world leaders in smart supply chains
4:20 p.m. – Question period
4:25 p.m. – Conclusion
4:55 p.m. – End of the Forum

History of artificial intelligence

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols.

In 1973, in response to the criticism from James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an 'AI winter'.

Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to the presence of powerful computer hardware.

The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that 'by discovering the true nature of the gods, man has been able to reproduce it.'[10][11]

Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to 'algorithm') and European scholastic philosophers such as William of Ockham and Duns Scotus.[12]

Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such a way as to produce all possible knowledge.[14]

The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction.

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s.

This simplified version of the problem allowed Turing to argue convincingly that a 'thinking machine' was at least plausible and the paper answered all the most common objections to the proposition.[30]

Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.[32]

When access to digital computers became possible in the mid-1950s, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols, and that the manipulation of symbols could well be the essence of human thought.

To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end.
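This search-and-backtrack approach can be illustrated with a minimal sketch in Python; the maze encoding, the solve function, and the coordinate scheme are illustrative choices, not code from any of the historical programs.

# Minimal sketch of search with backtracking: advance one step at a time
# and back up whenever a dead end is reached.
def solve(maze, pos, goal, visited=None):
    """Return a path from pos to goal as a list of cells, or None."""
    if visited is None:
        visited = set()
    if pos == goal:
        return [pos]
    visited.add(pos)
    r, c = pos
    for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        nr, nc = step
        if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                and maze[nr][nc] == 0 and step not in visited):
            rest = solve(maze, step, goal, visited)
            if rest is not None:      # this step eventually reaches the goal
                return [pos] + rest
    return None                       # dead end: backtrack

maze = [[0, 1, 0],                    # 0 = open cell, 1 = wall
        [0, 1, 0],
        [0, 0, 0]]
print(solve(maze, (0, 0), (0, 2)))

The same try-and-retract pattern underlies the game-playing and theorem-proving programs of the period: make a move or a deduction, and abandon it if it leads nowhere.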

Other 'searching' programs accomplished impressive tasks such as solving problems in geometry and algebra; examples include Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961).[49]

They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies.

Its limb control system allowed it to walk with its lower limbs and to grip and transport objects with its hands, using tactile sensors.

One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90]

Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little 'symbol processing' and a great deal of embodied, instinctive, unconscious 'know how'.[91][92]

However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[100]

A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and this soon led to a collaboration with the French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[101]

In 1975, in a seminal paper, Minsky noted that many of his fellow 'scruffy' researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something.

We know these facts are not always true and that deductions using these facts will not be 'logical', but these structured sets of assumptions are part of the context of everything we say and think.

In the 1980s a form of AI program called 'expert systems' was adopted by corporations around the world and knowledge became the focus of mainstream AI research.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts.

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place.
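The core mechanism can be sketched in a few lines of Python; the rules and facts below are invented for illustration, and real expert systems used far richer rule languages, but the shape is the same: if all the premises of a rule are known, assert its conclusion.

# Toy forward-chaining rule engine: facts are strings, rules are
# (premises, conclusion) pairs derived from an expert's knowledge.
rules = [
    ({"engine cranks", "engine does not start"}, "fuel or ignition fault"),
    ({"fuel or ignition fault", "fuel gauge reads empty"}, "tank is empty"),
]

def infer(facts, rules):
    """Apply rules repeatedly until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine cranks", "engine does not start",
             "fuel gauge reads empty"}, rules))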

'AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,'[112]

'[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay'.[113]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows.

Douglas Lenat, who started and led the project, argued that there is no shortcut ― the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand.

Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[117]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a 'Hopfield net') could learn and process information in a completely new way.

Around the same time, Geoffrey Hinton and David Rumelhart popularized a method for training neural networks called 'backpropagation', also known as the reverse mode of automatic differentiation published by Seppo Linnainmaa (1970) and applied to neural networks by Paul Werbos.

Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[120][123]

The term 'AI winter' was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[124]

They were difficult to update, they could not learn, they were 'brittle' (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier.

They believed that, to show real intelligence, a machine needs to have a body — it needs to perceive, move, survive and deal with the world.

They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox).

He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place.

Robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since 'the world is its own best model'.

In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[135]

Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability.

Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s.

Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of 'artificial intelligence'.[136]

The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second.

Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[140]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous increase in the speed and capacity of computers by the 90s.[142]

When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

By this definition, simple programs that solve specific problems are 'intelligent agents', as are human beings and organizations of human beings, such as firms.
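In code, the 'object or module' half of that marriage amounts to a small interface; the class and method names below are illustrative, not a standard API.

# Illustrative agent interface: anything that maps percepts to actions
# in pursuit of a goal counts as an agent under this definition.
class Agent:
    def act(self, percept):
        raise NotImplementedError

class ThermostatAgent(Agent):
    """A deliberately simple 'intelligent agent': one rule, one goal."""
    def __init__(self, target):
        self.target = target

    def act(self, percept):           # percept: the current temperature
        return "heat on" if percept < self.target else "heat off"

print(ThermostatAgent(target=20.0).act(18.5))   # -> heat on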

It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory.

It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[146][148]

There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research.

The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable;

Nick Bostrom explains 'A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.'[160]

In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: 'Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.'[161][162][163]

Minsky believed that the answer was that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms.

Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[168]

In the first decades of the 21st century, access to large amounts of data (known as 'big data'), faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy.

In fact, McKinsey Global Institute estimated in their famous paper 'Big data: The next frontier for innovation, competition, and productivity' that 'by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data'.

Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[172]

Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers.[172]

A common problem for recurrent neural networks is the vanishing gradient problem: gradients passed back through the layers shrink at each step and can effectively disappear once they are rounded off to zero.
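A minimal numerical sketch of the effect, assuming sigmoid activations and arbitrary illustrative weights (the same shrinking product appears when a recurrent network is unrolled through time):

import numpy as np

# The gradient reaching early layers is a product of per-layer factors;
# with sigmoid units each factor includes sigmoid'(z) <= 0.25, so the
# product shrinks toward zero as depth grows.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x, grad = 0.5, 1.0
for layer in range(1, 51):
    w = rng.normal()                  # illustrative scalar weight
    x = sigmoid(w * x)
    grad *= w * x * (1 - x)           # chain rule through this layer
    if layer % 10 == 0:
        print(f"after {layer} layers, gradient = {grad:.3e}")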

State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on things like the MNIST database, and traffic sign recognition.[174]

Language processing engines such as IBM Watson can beat humans at answering general trivia questions, and recent developments in deep learning have produced astounding results in competing with humans in games like Go and Doom (the latter, being a first-person shooter, has sparked some controversy).[175][176][177][178]

Big data refers to a collection of data that cannot be captured, managed, and processed by conventional software tools within a certain time frame.

In other words, if big data is likened to an industry, the key to making that industry profitable is to increase the processing capability of the data and to realize its added value through processing.

Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that responds in a manner similar to human intelligence.

AI has also been defined as the ability of a machine to perform 'general intelligent action'.[3] Academic sources reserve 'strong AI' to refer to machines capable of experiencing consciousness.

World Economic Forum on the Middle East and North Africa, Sweimeh, Jordan

A unique and open platform bringing together the finest digital players, with more than 70 partners to date (startups and technology companies…), offering the best of digital and open innovation to Roland Berger's clients.

Axelle is a public figure in France. She has wide experience in digital subjects and covers a broad spectrum in this field, from data strategy, digital transformation, and online privacy to artificial intelligence ...

A graduate of Sciences Po Paris (1997), holding a Master's degree in international economic law (Paris II, 2000) and an LLM (King's College London, 2003), Axelle began her career in a law firm and as a researcher and teacher at university.

Artificial Intelligence Sino French TechMeeting

The international facility management services company founded more than 20 years ago in Vietnam and headquartered in Shanghai is joining the Artificial ...

Arya.AI Becomes First Indian Startup

Arya.ai, a Mumbai-based artificial intelligence (AI) startup, has become the first Indian startup to be selected by Paris&Co, a French innovation agency, as one of ...

GIST TechConnect: Artificial Intelligence for Startups

On Tuesday, August 8, the U.S. Department of State hosted an interactive webchat featuring experts from Amazon Web Services sharing their thoughts on ...

Thales Artificial Intelligence goes BIG in Canada

3 Questions about Innovation to Patrick Albert, President of Hub France AI

What is the link between crazy toads and innovation? Find out by listening to Patrick Albert, President of Hub France AI, who answered our ...

The Magic of Startup Weekend, Artificial Intelligence edition in Montreal (2017)

Startup Weekend Artificial Intelligence, 2017 edition in Montreal, Canada. An overview of the weekend with key messages from organizers and pitch judges.

ICT2018 - Artificial Intelligence – the European way

Over the past few years, artificial intelligence (AI) has rapidly matured into a viable technology with profound implications for our society. To ensure that Europe ...

ICT2018 - Innovation+Startup Forum: Fostering the deep tech and fintech ecosystems in Europe

Leading European experts on blockchain, fintech, and artificial intelligence come together to discuss the European tech ecosystem. Our future activities will be ...

Using Artificial Intelligence to make accurate medical predictions? Meet Owkin (Startup Spotlights)

Hear from Owkin, a healthcare startup based in Paris, France. COO and CFO Anna Bondarenko explains her startup's purpose, and how the Launchpad ...

Vending machines get fresh start with high technology and unique ideas

New types of vending machines are appearing all the time in Korea, some of them equipped with never-before-seen technology. According to our Lee ...