AI News

MacInTouch Recent News

Apple today announced a new MacBook Pro with 16-inch (diagonal) screen and new keyboard to replace the defectively designed keyboard plaguing its current laptop line, deleting the 15-inch model in the process (while continuing to sell its other laptops with bad keyboards).

The all-new MacBook Pro features a brilliant 16-inch Retina Display, the latest 8-core processors, up to 64GB of memory, next-generation graphics with up to 8GB of VRAM and a new advanced thermal design, making it the most powerful MacBook Pro ever.

The 16-inch MacBook Pro features a new Magic Keyboard with a refined scissor mechanism that delivers 1mm of key travel and a stable key feel, as well as an Apple-designed rubber dome that stores more potential energy for a responsive key press.

The sophisticated fan design features a larger impeller with extended blades along with bigger vents, resulting in a 28 percent increase in airflow, while the heat sink is 35 percent larger, enabling significantly more heat dissipation than before.

The 16-inch MacBook Pro features the latest 6- and 8-core 9th-generation processors with Turbo Boost speeds up to 5.0 GHz, which deliver up to 2.1 times faster performance than the quad-core 15-inch MacBook Pro. Its powerful CPUs, combined with faster memory up to 64GB for the first time, and its more advanced thermal design, will enable pro workflows never before possible on a MacBook Pro.

Artificial intelligence

As machines become increasingly capable, tasks considered to require 'intelligence' are often removed from the definition of AI, a phenomenon known as the AI effect.[3] A quip in Tesler's Theorem says 'AI is whatever hasn't been done yet.'[4] For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.[5] Modern machine capabilities generally classified as AI include successfully understanding human speech,[6] competing at the highest level in strategic game systems (such as chess and Go),[7] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

AI research is divided into subfields based on technical considerations, such as particular goals ('robotics' or 'machine learning'),[14] the use of particular tools ('logic' or artificial neural networks), or deep philosophical differences.[15][16][17] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[13] The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[14] General intelligence is among the field's long-term goals.[18] Approaches include statistical methods, computational intelligence, and traditional symbolic AI.

Turing proposed changing the question from whether a machine was intelligent, to 'whether or not it is possible for machinery to show intelligent behaviour'.[27] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete 'artificial neurons'.[28] The field of AI research was born at a workshop at Dartmouth College in 1956,[29] where the term 'Artificial Intelligence' was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.[30] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[31] They and their students produced programs that the press described as 'astonishing':[32] computers were learning checkers strategies (c. 1954), solving word problems in algebra, proving logical theorems and speaking English.

At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[9] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[11] In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[23] The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[40] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[41] In 2011, IBM's question-answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, in a Jeopardy! quiz show exhibition match.

Faster computers, algorithmic improvements and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[43] The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[44] as do intelligent personal assistants in smartphones.[45] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[7][46] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[47] who at the time continuously held the world No. 1 ranking for two years.

Bloomberg's Jack Clark also presents factual data indicating the improvements of AI since 2012, supported by lower error rates in image processing tasks.[50] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[12] Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[50] In a 2017 survey, one in five companies reported they had 'incorporated AI in some offerings or processes'.[51][52] Around 2016, China greatly accelerated its government funding.

Given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an 'AI superpower'.[53][54] However, it has been acknowledged that reports regarding artificial intelligence have tended to be exaggerated.[55][56][57] Computer science defines AI research as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[58]

If the AI is programmed for 'reinforcement learning', goals can be implicitly induced by rewarding some types of behavior or punishing others. Alternatively, an evolutionary system can induce goals by using a 'fitness function' to mutate and preferentially replicate high-scoring AI systems, similarly to how animals evolved to innately desire certain goals such as finding food.[59] Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[60] Such systems can still be benchmarked if the non-goal system is framed as a system whose 'goal' is to successfully accomplish its narrow classification task.[61] AI often revolves around the use of algorithms.
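
The fitness-function idea above can be sketched in a few lines. The following toy example is only an illustration, not any cited system: the "genome" encoding, target value, mutation rate and population size are all invented for the sketch.

```python
import random

# Toy sketch of goal induction via a fitness function: each candidate is a list
# of numbers, and fitness rewards candidates whose sum is close to a target.
# All names and parameters here are hypothetical illustrative choices.

TARGET = 42
GENOME_LEN = 8

def fitness(genome):
    """Higher is better: negative distance of the genome's sum from the target."""
    return -abs(sum(genome) - TARGET)

def mutate(genome, rate=0.3):
    """Randomly perturb some genes."""
    return [g + random.uniform(-1, 1) if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.uniform(0, 10) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        # Preferentially replicate high-scoring candidates, as described above.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

print(round(sum(evolve()), 2))  # should end up close to 42
```

The 'goal' (a sum close to 42) is never stated as an explicit rule; it is induced entirely by which candidates the fitness function allows to reproduce.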

The traits described below have received the most attention.[14] Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[82] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[83] These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a 'combinatorial explosion': they became exponentially slower as the problems grew larger.[63] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model.
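
The scale of that combinatorial explosion is easy to demonstrate: exhaustively enumerating every truth assignment over n propositions takes 2^n checks. The variable counts in this small sketch are arbitrary and chosen only to show the growth rate.

```python
from itertools import product

# Exhaustive enumeration of truth assignments over n propositions needs 2**n
# checks; the counts below are arbitrary and only illustrate the growth rate.

def count_assignments(n_vars):
    return sum(1 for _ in product([False, True], repeat=n_vars))

for n in (4, 12, 20):
    print(n, count_assignments(n))   # 16, 4096, 1048576

print("at n = 60, exhaustive checking would need", 2**60, "assignments")
```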

Such formal knowledge representations can be used in content-based indexing and retrieval,[93] scene interpretation,[94] clinical decision support,[95] knowledge discovery (mining 'interesting' and actionable inferences from large databases),[96] and other areas.[97] Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the sheer breadth of commonsense knowledge, and the subsymbolic form of much commonsense knowledge. Intelligent agents must be able to set goals and achieve them.[104] They need a way to visualize the future (a representation of the state of the world, plus predictions about how their actions will change it) and to make choices that maximize the utility (or 'value') of the available choices.[105] In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[106] However, if the agent is not the only actor, then it must be able to reason under uncertainty.
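
As a rough illustration of choosing actions by predicted utility in the classical single-actor setting described above, here is a minimal sketch; the state, actions, world model and utility function are hypothetical toy choices, not drawn from the cited literature.

```python
# Deterministic one-step planning sketch: the agent predicts how each action
# changes the world state and picks the action whose predicted successor state
# has the highest utility. Everything here is a made-up toy domain.

state = {"position": 0, "has_coffee": False}

def transition(state, action):
    """World model: predict the next state for an action (assumed deterministic)."""
    nxt = dict(state)
    if action == "move_right":
        nxt["position"] += 1
    elif action == "brew_coffee":
        nxt["has_coffee"] = True
    return nxt

def utility(state):
    """Toy utility: coffee is worth a lot, progress to the right a little."""
    return 10 * state["has_coffee"] + state["position"]

actions = ["move_right", "brew_coffee", "wait"]
best_action = max(actions, key=lambda a: utility(transition(state, a)))
print(best_action)  # brew_coffee
```

In the classical setting the agent may trust this model exactly; once other actors or noise enter the picture, the same choice has to be made over expected utility instead.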

Visual input is often ambiguous: a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its 'object model' to assess that fifty-meter pedestrians do not exist.[121] AI is heavily used in robotics.[122] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[123] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment.

Moravec's paradox is named after Hans Moravec, who stated in 1988 that 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility'.[127][128] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[129] Moravec's paradox can be extended to many forms of social intelligence.[131][132] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[133] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects.[134][135][136] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[137] In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent.

Nowadays, the vast majority of AI researchers work instead on tractable 'narrow AI' applications (such as medical diagnosis or automobile navigation).[140] Many researchers predict that such 'narrow AI' work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[18][141] Many advances have general, cross-domain significance.

One high-profile example is DeepMind, which in the 2010s developed a 'generalized artificial intelligence' that could learn many diverse Atari games on its own and later developed a variant of the system that succeeds at sequential learning.[142][143][144] Besides transfer learning,[145] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to 'slurp up' a comprehensive knowledge base from the entire unstructured Web.[6] Some argue that some kind of (currently undiscovered) conceptually straightforward, but mathematically difficult, 'Master Algorithm' could lead to AGI.[146] Finally, a few 'emergent' approaches aim to simulate human intelligence extremely closely, in the belief that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[147][148] Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do.

This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.[153][154] Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[15] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[155] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.[156] Researchers at MIT (such as Marvin Minsky and Seymour Papert)[157] found that solving difficult problems in vision and natural language processing required ad-hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

Roger Schank described their 'anti-logic' approaches as 'scruffy' (as opposed to the 'neat' paradigms at CMU and Stanford).[16] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of 'scruffy' AI, since they must be built by hand, one complicated concept at a time.[158] When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[159] This 'knowledge revolution' led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[39] A key component of the system architecture for all expert systems is the knowledge base, which stores the facts and rules the system uses for inference.[160] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[162][163][164][165] Interest in neural networks and 'connectionism' was revived by David Rumelhart and others in the middle of the 1980s.[166] Artificial neural networks are an example of soft computing—they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient.

For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[179] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[180] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[123] Many learning algorithms use search algorithms based on optimization.
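
The 'proof as path search' view can be made concrete with a tiny forward-chaining search from premises toward a goal; the rules and facts below are invented examples, not taken from the cited sources.

```python
from collections import deque

# Toy "proof as path search": breadth-first search from a set of premises to a
# goal, where each step applies one Horn-clause-style rule. The rules and facts
# are invented illustrative content.

RULES = [                       # (body, head): if all body facts hold, infer head
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "cold"}, "ice"),
    ({"ice"}, "slippery"),
]

def prove(premises, goal):
    frontier = deque([frozenset(premises)])
    seen = {frozenset(premises)}
    while frontier:
        facts = frontier.popleft()
        if goal in facts:
            return True                 # found a path from premises to the goal
        for body, head in RULES:
            if body <= facts and head not in facts:
                nxt = facts | {head}
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

print(prove({"rain", "cold"}, "slippery"))  # True
```

Each search step applies one inference rule, so a successful search is literally a path from the premises to the conclusion.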

AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[198] Bayesian networks[199] are a very general tool that can be used for various problems: reasoning (using the Bayesian inference algorithm),[200] learning (using the expectation-maximization algorithm),[202] planning (using decision networks)[203] and perception (using dynamic Bayesian networks).[204] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[204] Compared with symbolic logic, formal Bayesian inference is computationally expensive.
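
A minimal sketch of exact Bayesian inference on a two-variable network (Rain -> WetGrass) is shown below; the probabilities are made-up illustrative numbers, not from the cited sources.

```python
# Exact Bayesian inference by enumeration on a tiny two-variable network.
# P(Rain) and P(WetGrass | Rain) below are invented toy numbers.

P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass=True | Rain)

def posterior_rain_given_wet():
    """P(Rain=True | WetGrass=True) via Bayes' rule."""
    joint = {r: P_rain[r] * P_wet_given_rain[r] for r in (True, False)}
    return joint[True] / sum(joint.values())

print(round(posterior_rain_given_wet(), 3))  # 0.692
```

Enumeration like this grows exponentially with the number of variables, which is one reason exact Bayesian inference is computationally expensive and approximate algorithms matter in practice.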

Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[205] and information value theory.[105] These tools include models such as Markov decision processes,[206] dynamic decision networks,[204] game theory and mechanism design.[207] The simplest AI applications can be divided into two types: classifiers ('if shiny then diamond') and controllers ('if shiny then pick up').
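
One of the models named above, the Markov decision process, can be illustrated with a few lines of value iteration. The three states, transition probabilities and rewards below are hypothetical toy values; only the update rule is the point.

```python
# Value iteration on a made-up three-state Markov decision process (MDP),
# applying V(s) <- max_a sum_s' P(s' | s, a) * (R(s, a, s') + gamma * V(s')).

STATES = ["low", "high", "done"]
ACTIONS = ["wait", "work"]
GAMMA = 0.9

# P[(state, action)] is a list of (probability, next_state, reward) outcomes.
P = {
    ("low", "wait"):  [(1.0, "low", 0.0)],
    ("low", "work"):  [(0.7, "high", 1.0), (0.3, "low", 0.0)],
    ("high", "wait"): [(1.0, "high", 0.5)],
    ("high", "work"): [(0.8, "done", 5.0), (0.2, "low", 0.0)],
    ("done", "wait"): [(1.0, "done", 0.0)],
    ("done", "work"): [(1.0, "done", 0.0)],
}

V = {s: 0.0 for s in STATES}
for _ in range(100):  # repeated sweeps; values converge geometrically in gamma
    V = {
        s: max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[(s, a)]) for a in ACTIONS)
        for s in STATES
    }

print({s: round(v, 2) for s, v in V.items()})
```

Acting greedily with respect to the converged values then plays the role of the controller described above, while a classifier would instead map observations directly to labels.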

Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[224] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ('fire together, wire together'), GMDH or competitive learning.[225] Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[226][227] and was introduced to neural networks by Paul Werbos.[228][229][230] Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[231] To summarize, most neural networks use some form of gradient descent on a hand-created neural topology.
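
To make 'gradient descent on a hand-created neural topology' concrete, here is a toy two-layer network trained by backpropagation on XOR; the layer sizes, learning rate and epoch count are arbitrary illustrative choices, and convergence depends on the random initialization.

```python
import numpy as np

# Toy backpropagation sketch: batch gradient descent on a fixed 2-4-1 sigmoid
# network learning XOR. All hyperparameters are arbitrary illustrative choices.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```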

Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[242] Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[243] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture.
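
The core operation of a CNN is sliding a small filter over its input. The sketch below implements that operation in plain NumPy (as cross-correlation, the convention most deep-learning libraries use) on a made-up 5x5 'image' and a hand-written vertical-edge filter.

```python
import numpy as np

# Sliding a 3x3 filter over a 2D input, the building block of a CNN layer.
# The image and the filter values are invented for illustration.

image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

kernel = np.array([            # responds strongly near vertical edges
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

def conv2d(img, k):
    kh, kw = k.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

print(conv2d(image, kernel))   # large magnitudes near the edge, zeros elsewhere
```

A real CNN learns many such filters by backpropagation and stacks them with pooling layers and nonlinearities.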

Because a recurrent neural network (RNN) can be unfolded in time into a very deep feedforward network, an RNN is an example of deep learning.[233] RNNs can be trained by gradient descent[248][249][250] but suffer from the vanishing gradient problem.[234][251] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[252] Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.
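
A vanilla recurrent step can be sketched in a few lines; the weight shapes and the random input sequence below are arbitrary illustrative choices.

```python
import numpy as np

# Vanilla recurrent step, h_t = tanh(W_x x_t + W_h h_{t-1}), unrolled over a
# short made-up sequence. Shapes and values are arbitrary toy choices.

rng = np.random.default_rng(1)
W_x = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden
W_h = rng.normal(scale=0.5, size=(4, 4))   # hidden -> hidden (recurrent weights)

h = np.zeros(4)
sequence = rng.normal(size=(6, 3))          # six time steps of 3-dimensional input
for x_t in sequence:
    h = np.tanh(x_t @ W_x + h @ W_h)

print(h.round(3))   # final hidden state summarizing the whole sequence
```

Backpropagating through this loop multiplies gradients by the recurrent weights once per time step, which is the source of the vanishing-gradient problem that LSTM cells were designed to mitigate.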

There is no consensus on how to characterize which tasks AI tends to excel at.[263] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[264][265] Researcher Andrew Ng has suggested, as a 'highly imperfect rule of thumb', that 'almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.'[266] Moravec's paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[129] Games provide a well-publicized benchmark for assessing rates of progress.

Once a capability becomes commonplace, it is often no longer labeled AI; this phenomenon is described as the AI effect.[277] High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[278] prediction of judicial decisions,[279] targeting online advertisements,[276][280][281] and energy storage.[282] With social media sites overtaking TV as a source of news for young people, and news organizations increasingly reliant on social media platforms for distribution,[283] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[284] AI in healthcare is often used for classification, whether to automate initial evaluation of a CT scan or EKG or to identify high-risk patients for population health.

Another study was reported to have found that artificial intelligence was as good as trained doctors in identifying skin cancers.[287] Another study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[288] In one study using transfer learning, the machine performed diagnosis similarly to a well-trained ophthalmologist and could generate a decision within 30 seconds on whether or not the patient should be referred for treatment, with more than 95% accuracy.[289] According to CNN, a recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot.

However, Google has been working on an algorithm to eliminate the need for pre-programmed maps and instead create a device able to adjust to a variety of new surroundings.[296] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[297] Another factor influencing the viability of driverless automobiles is the safety of the passenger.

RPA uses artificial intelligence to train and teach software robots to process transactions, monitor compliance and audit processes automatically.[307] The use of AI in applications such as online trading and decision-making has changed major economic theories.[308] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing.

Intelligence technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles and tanks, including within Manned and Unmanned Teams (MUM-T).[315] Worldwide annual military spending on robotics rose from US$5.1 billion in 2010 to US$7.5 billion in 2015.[316][317] Military drones capable of autonomous action are widely considered a useful asset.[318] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[319] For financial statement audits, AI makes continuous auditing possible.

The potential benefits are that overall audit risk will be reduced, the level of assurance will be increased, and the duration of the audit will be shortened.[320] It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.[321] A documented case reports that online gambling companies were using AI to improve customer targeting.[322] Moreover, the application of personality computing AI models can help reduce the cost of advertising campaigns by adding psychological targeting to more traditional sociodemographic or behavioral targeting.[323] Artificial intelligence has inspired numerous creative applications, including its usage to produce visual art.

Recent exhibitions showcasing the usage of AI to produce art include the Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the DeepDream algorithm,[325] and the exhibition 'Unhuman: Art in the Age of AI,' which took place in Los Angeles and Frankfurt in the fall of 2017.[326][327] In the spring of 2018, the Association for Computing Machinery dedicated a special magazine issue to the subject of computers and art, highlighting the role of machine learning in the arts.[328] The Austrian Ars Electronica and the Museum of Applied Arts, Vienna opened exhibitions on AI in 2019.[329][330] The Ars Electronica's 2019 festival 'Out of the Box' extensively thematized the role of the arts for a sustainable societal transformation with AI.[331] There are three philosophical questions related to AI: whether artificial general intelligence is possible; whether intelligent machines are dangerous; and whether a machine can have a mind, consciousness, and mental states in the same sense that human beings do.

A group of prominent tech titans, including Peter Thiel, Amazon Web Services and Elon Musk, have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development.[348] The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from eventual superhumanly-capable AI.[349] Other technology industry leaders believe that artificial intelligence is helpful in its current form and will continue to assist humans.

Musk also funds companies developing artificial intelligence such as Google DeepMind and Vicarious to 'just keep an eye on what's going on with artificial intelligence.[352] I think there is potentially a dangerous outcome there.'[353][354] For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.[355][356] Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence.[357] Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy[358] was deeply misguided.

Algorithms have a host of applications in today's legal system already, assisting officials ranging from judges to parole officers and public defenders in gauging the predicted likelihood of recidivism of defendants.[361] COMPAS (an acronym for Correctional Offender Management Profiling for Alternative Sanctions) counts among the most widely utilized commercially available solutions.[361] It has been suggested that COMPAS assigns an exceptionally elevated risk of recidivism to black defendants while, conversely, ascribing low risk estimates to white defendants significantly more often than statistically expected.[361] The relationship between automation and employment is complicated.

Estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at 'high risk' of potential automation, while an OECD report classifies only 9% of U.S. jobs as 'high risk'.[364][365][366] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[367] Author Martin Ford and others go further and argue that many jobs are routine, repetitive and (to an AI) predictable.

Research in this area includes machine ethics, artificial moral agents, friendly AI, and discussion of building a human rights framework.[369] Wendell Wallach introduced the concept of artificial moral agents (AMA) in his book Moral Machines.[370] For Wallach, AMAs have become a part of the research landscape of artificial intelligence as guided by its two central questions, which he identifies as 'Does Humanity Want Computers Making Moral Decisions'[371] and 'Can (Ro)bots Really Be Moral'.[372] For Wallach, the question is not centered on whether machines can demonstrate the equivalent of moral behavior, but rather on the constraints which society may place on the development of AMAs.[373] The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[374] The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: 'Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.'

Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.[386][141] Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.[386] Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.[387] This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.