AI News: Will the future of work be ethical? Founder perspectives

Artificial intelligence

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.

Leading AI textbooks define the field as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1]

Colloquially, the term 'artificial intelligence' is often used to describe machines (or computers) that mimic 'cognitive' functions that humans associate with the human mind, such as 'learning' and 'problem solving'.[2]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[14]

Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics.

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding;

and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[24][12]

The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as '0' and '1', could simulate any conceivable act of mathematical deduction.

The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[41]

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from 'sporadic usage' in 2012 to more than 2,700 projects.

He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[12]

Computer science defines AI research as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1]

A more elaborate definition characterizes AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”[59]

An AI's intended utility function (or goal) can be simple ('1 if the AI wins a game of Go, 0 otherwise') or complex ('Perform actions mathematically similar to ones that succeeded in the past').

Alternatively, an evolutionary system can induce goals by using a 'fitness function' to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[60]

Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[61]

Some of the 'learners' described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including whichever combination of mathematical functions would best describe the world.

In practice, it is almost never possible to consider every possibility, because of the phenomenon of 'combinatorial explosion', where the amount of time needed to solve a problem grows exponentially.
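
As a rough illustration (our own toy sketch, not from the source), counting the candidate routes a brute-force traveling-salesman search would have to examine shows how quickly exhaustive enumeration becomes infeasible:

```python
import math

# Our own toy illustration: the number of distinct routes a brute-force
# traveling-salesman search must examine grows factorially with city count.
for n in (5, 10, 15, 20):
    tours = math.factorial(n - 1) // 2   # undirected tours from a fixed start
    print(f"{n} cities -> {tours:,} candidate tours")
```

At twenty cities there are already over 60 quadrillion tours, which is why practical search must rely on heuristics rather than enumeration.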

The third major approach, extremely popular in routine business AI applications, comprises analogizers such as SVM and nearest-neighbor: 'After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza'.
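
A minimal sketch of that influenza-style analogizer, using entirely hypothetical patient records and a plain nearest-neighbor vote:

```python
from math import dist

# Hypothetical patient records, for illustration only:
# ((temperature_C, age), had_influenza)
records = [((39.1, 30), True), ((36.8, 45), False),
           ((38.7, 27), True), ((37.0, 60), False),
           ((38.9, 35), True), ((36.6, 22), False)]

def flu_estimate(patient, k=3):
    """Fraction of the k most similar past patients who had influenza."""
    nearest = sorted(records, key=lambda r: dist(r[0], patient))[:k]
    return sum(had_flu for _, had_flu in nearest) / k

print(flu_estimate((38.8, 33)))  # 1.0 -> "100% of similar patients had flu"
```

In practice the features would be rescaled so that temperature and age contribute comparably to the distance.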

A fourth approach is harder to understand intuitively, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial 'neurons' that learn by comparing the network's output to the desired output and altering the strengths of the connections between internal neurons to 'reinforce' connections that seem useful.
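
A toy sketch of this error-driven weight adjustment (our own, not from the source): a single artificial neuron, a perceptron, learning the logical OR function with an illustrative learning rate:

```python
import random

# One artificial neuron learning OR by nudging its connection weights
# toward the desired output on each training example.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias, lr = 0.0, 0.1

for _ in range(100):                     # repeat over the training data
    for (x0, x1), target in data:
        out = 1 if w[0] * x0 + w[1] * x1 + bias > 0 else 0
        err = target - out               # compare to the desired output
        w[0] += lr * err * x0            # reinforce connections that
        w[1] += lr * err * x1            # would have helped
        bias += lr * err

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0) for x, _ in data])
```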

Therefore, according to Occam's razor principle, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.[69]
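
In code, such a fit-versus-complexity trade-off might look like the following sketch; the penalty form (a sum of squared weights, as in ridge regression) and the weighting factor lam are illustrative choices among many:

```python
def regularized_score(errors, weights, lam=0.1):
    """Reward fit to the data, penalize complexity (here: squared weights).

    Both the penalty form and lam are illustrative; ridge, lasso, and
    MDL-style schemes are common concrete instances of the same idea.
    """
    misfit = sum(e * e for e in errors) / len(errors)
    complexity = sum(w * w for w in weights)
    return misfit + lam * complexity

# A slightly worse fit achieved with far smaller weights can win overall:
print(regularized_score([0.10, -0.20], [3.0, 0.5]))   # 0.95
print(regularized_score([0.15, -0.25], [0.4, 0.2]))   # ~0.063
```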

A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[70]

Instead, such classifiers learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects.

Humans also have a powerful mechanism of 'folk psychology' that helps them to interpret natural-language sentences such as 'The city councilmen refused the demonstrators a permit because they advocated violence'.

For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[79][80][81]

By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[84]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a 'combinatorial explosion': they became exponentially slower as the problems grew larger.[64]

In addition, some projects attempt to gather the 'commonsense knowledge' known to the average person into a database containing extensive knowledge about the world.

Such a broad knowledge base can act as a mediator between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern).

They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and to make choices that maximize the utility (or 'value') of the available choices.[106]

A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts.

Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well.

Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications.

Machine perception is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world.

A giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its 'object model' to assess that fifty-meter pedestrians do not exist.[122]

Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[124]

Moravec's paradox is named after Hans Moravec, who stated in 1988 that 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility'.[128][129]

Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[138]

Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[139]

These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI.

Many researchers predict that such 'narrow AI' work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[18][142]

One high-profile example is that DeepMind in the 2010s developed a 'generalized artificial intelligence' that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[143][144][145]

Hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and the ability to 'slurp up' a comprehensive knowledge base from the entire unstructured Web.[6]

Finally, a few 'emergent' approaches aim to simulate human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[148][149]

For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence).

A problem like machine translation is considered 'AI-complete', because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

When access to digital computers became possible in the mid 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science.

Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[15]

His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[156]

Researchers found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[160]

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.

This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[163][164][165][166]

Artificial neural networks are an example of soft computing—they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient.

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results.

However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures.

Compared with GOFAI, new 'statistical learning' techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring a semantic understanding of the datasets.

The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models;

In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.[171][172]

Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[181]

In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called 'pruning the search tree').

These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top.
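
A minimal sketch of blind hill climbing on a made-up one-dimensional landscape (the function and step size are illustrative):

```python
import random

def landscape(x):
    return -(x - 3.0) ** 2            # a single peak at x = 3

x = random.uniform(-10, 10)           # start at a random point
for _ in range(10_000):
    step = random.uniform(-0.1, 0.1)  # a small jump in a random direction
    if landscape(x + step) > landscape(x):
        x += step                     # keep moving the guess uphill

print(round(x, 2))  # ~3.0 (on harder landscapes it may stop at a local peak)
```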

For example, they may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses).
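
The following sketch implements that loop for a toy bit-string problem; the all-ones target, population size, and mutation rate are arbitrary illustrative choices:

```python
import random

# Evolutionary loop: select the fittest, recombine, occasionally mutate.
TARGET_LEN = 20
fitness = lambda genome: sum(genome)  # count of 1-bits; maximum is TARGET_LEN

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    survivors = pop[:10]                      # keep only the fittest
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, TARGET_LEN)
        child = a[:cut] + b[cut:]             # recombination (crossover)
        i = random.randrange(TARGET_LEN)
        child[i] ^= random.random() < 0.05    # rare mutation flips one bit
        children.append(child)
    pop = survivors + children

print(generation, max(map(fitness, pop)))     # typically reaches 20 quickly
```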

Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[186][187]

Fuzzy set theory assigns a 'degree of truth' (between 0 and 1) to vague statements such as 'Alice is old' (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false.

Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as 'if you are close to the destination station and moving fast, increase the train's brake pressure';
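
A sketch of how that train-braking rule could be encoded, with made-up membership functions; fuzzy AND is commonly taken as the minimum of the two degrees of truth:

```python
# Made-up membership functions returning a degree of truth in [0, 1].
def close_to_station(distance_m):
    return max(0.0, min(1.0, (500 - distance_m) / 500))

def moving_fast(speed_kmh):
    return max(0.0, min(1.0, (speed_kmh - 20) / 60))

def brake_pressure(distance_m, speed_kmh):
    # "if close to the station AND moving fast, increase brake pressure"
    return min(close_to_station(distance_m), moving_fast(speed_kmh))

print(brake_pressure(100, 70))  # 0.8 -> brake fairly hard
```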

Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[203]
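
For instance, a scalar Kalman filter fusing noisy readings of a roughly constant quantity fits in a few lines; the noise parameters here are illustrative assumptions:

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter: q = process noise, r = measurement noise."""
    x, p = 0.0, 1.0           # state estimate and its variance
    for z in measurements:
        p += q                # predict: uncertainty grows between readings
        k = p / (p + r)       # Kalman gain: how much to trust the reading
        x += k * (z - x)      # update the estimate toward the measurement
        p *= (1 - k)          # the updated estimate is more certain
        yield x

print(list(kalman_1d([1.2, 0.8, 1.1, 0.9, 1.05])))  # smooths toward ~1.0
```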

Complicated graphs with diamonds or other 'loops' (undirected cycles) can require a sophisticated method such as Markov chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities.
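
The 'random walker' idea can be sketched with a minimal Metropolis sampler; the target density here is a standard normal, chosen purely for illustration rather than a real Bayesian network:

```python
import math
import random

def target(x):
    return math.exp(-x * x / 2)   # unnormalized standard normal density

def metropolis(n_steps=10_000, step=1.0):
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + random.uniform(-step, step)
        # Accept uphill moves always, downhill moves with probability ratio:
        if random.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

s = metropolis()
print(round(sum(s) / len(s), 2))  # ~0.0: the walker converges on the target
```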

Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise.

Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as 'naive Bayes' on most practical data sets.[219][220]
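
That conventional wisdom is easy to spot-check with scikit-learn (assuming it is installed); results vary by dataset, so this is illustrative rather than conclusive:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Compare cross-validated accuracy of naive Bayes and an SVM on one dataset.
X, y = load_breast_cancer(return_X_y=True)
for name, model in [("naive Bayes", GaussianNB()),
                    ("SVM", make_pipeline(StandardScaler(), SVC()))]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```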

A simple 'neuron' N accepts input from other neurons, each of which, when activated (or 'fired'), casts a weighted 'vote' for or against whether neuron N should itself activate.

One simple algorithm (dubbed 'fire together, wire together') is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another.
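
A bare-bones sketch of that rule, with made-up activity traces and an illustrative learning rate:

```python
def hebbian_update(w, pre_active, post_active, lr=0.1):
    """Strengthen weight w when pre- and post-synaptic neurons fire together."""
    if pre_active and post_active:
        w += lr
    return w

w = 0.0
activity = [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]  # (pre, post) events
for pre, post in activity:
    w = hebbian_update(w, pre, post)

print(round(w, 2))  # 0.3: reinforced by the three coincident firings
```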

In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending;

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events).

Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ('fire together, wire together'), GMDH or competitive learning.[225]

However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches.

For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a 'credit assignment path' (CAP) depth of seven.

Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[234][235][233]

In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning.[241]

Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[242]

In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[252]

The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[271]

The 'imitation game' (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[272]

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[278]

With social media sites overtaking TV as a source for news for young people and news organizations increasingly reliant on social media platforms for generating distribution,[283] major publishers now use artificial intelligence technology to post stories more effectively and generate higher volumes of traffic.

In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ transplant patients.[286]

Another study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[289]

In one study using transfer learning, the machine performed a diagnosis similarly to a well-trained ophthalmologist, and could generate a decision within 30 seconds on whether or not the patient should be referred for treatment, with more than 95% accuracy.[290]

The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery, and doing so better than a human surgeon, the team claimed.[291]

However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead, creating a device that would be able to adjust to a variety of new surroundings.[297]

Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[298]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing.

Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

This system will use cameras to ascertain traffic density and accordingly calculate the time needed to clear the traffic volume, which will determine the signal duration for vehicular traffic across streets.[313]
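
A hypothetical sketch of the calculation being described: converting a camera-derived vehicle count into a green-signal duration. The saturation-flow figure and clamping bounds are our own illustrative assumptions, not from the cited system:

```python
def green_time_seconds(queued_vehicles, lanes=2,
                       vehicles_per_lane_per_sec=0.5,
                       min_green=10, max_green=90):
    """Time to clear the queue, clamped to plausible signal bounds."""
    clearance = queued_vehicles / (lanes * vehicles_per_lane_per_sec)
    return max(min_green, min(max_green, clearance))

print(green_time_seconds(queued_vehicles=35))  # 35.0 seconds of green
```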

Intelligence technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles and tanks, including inside manned and unmanned teams (MUM-T).[317]

It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.[323]

Moreover, the application of personality computing AI models can help reduce the cost of advertising campaigns by adding psychological targeting to more traditional sociodemographic or behavioral targeting.[325]

Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations, has said: 'I think the dangerous applications for AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply do pure harm.'

He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down.

If this AI's goals do not reflect humanity's—one example is an AI told to compute as many digits of pi as possible—it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal.

For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.[358][359]

Algorithms have a host of applications in today's legal system already, assisting officials ranging from judges to parole officers and public defenders in gauging the predicted likelihood of recidivism of defendants.[364]

It has been suggested that COMPAS assigns an exceptionally elevated risk of recidivism to black defendants while, conversely, ascribing a low risk estimate to white defendants significantly more often than statistically expected.[364]

Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[370]

Research in this area includes machine ethics, artificial moral agents, friendly AI, and discussion of building a human rights framework.[372]

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[377]

The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: 'Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.

In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines.

Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence.

Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).

I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.'[380]

The philosophical position that John Searle has named 'strong AI' states: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'[383]

The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing or even ending civilization.

Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.[389]

A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed.[392]

In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the organic human form with lifelike muscular metallic skins; his later book The Gynoids was used by or influenced moviemakers, including George Lucas, and other creatives.

Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form.

Artificial Intelligence: Implications for Ethics and Religion

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):  What is it?  What can it do and be used for?  And what will be its implications for choice and free will;

and the comparative intelligence and capabilities of humans and machines in the future?  Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.

The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness.

A leading figure in debates about post-modernism, Taylor has written on topics ranging from philosophy, religion, literature, art and architecture to education, media, science, technology and economics.

In the early 2000s his focus shifted to computer ethics, and in 2004 he published a textbook, Ethics for the Information Age, that explores moral problems related to modern uses of information technology, such as privacy, intellectual property rights, computer security, software reliability, and the relationship between automation and unemployment.

His work is focused on the ethics of technology, including such topics as AI and ethics, the ethics of technological manipulation of humans, the ethics of mitigation of and adaptation towards risky emerging technologies, and various aspects of the impact of technology and engineering on human life and society, including the relationship of technology and religion (particularly the Catholic Church).

She is the author of several books including Trauma and Grace and, most recently, her memoir Call It Grace: Finding Meaning in a Fractured World. Jones, a popular public speaker, is sought by media to comment on major issues impacting society because of her deep grounding in theology, politics, women’s studies, economics, race studies, history, and ethics.

Since taking office in 2007, Chancellor Eisen has transformed the education of religious, pedagogical, professional, and lay leaders for North American Jewry, with a focus on graduating highly skilled, innovative leaders who bring Judaism alive in ways that speak authentically to Jews at a time of rapid and far-reaching change.

Meredith Whittaker

Whittaker co-founded M-Lab, a globally distributed network measurement system that provides the world’s largest source of open data on internet performance.

She has advised the White House, the FCC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, internet policy, measurement, privacy, and security.[14]

The AI Now Institute, which Whittaker co-founded, produces annual reports that examine the social implications of artificial intelligence, including bias, rights and liberties.[19][20]

In 2018, Whittaker was one of the core organizers of the Google Walkout, in which over 20,000 Google employees walked out internationally to shed light on Google's dismissive culture around claims of sexual misconduct and its work on citizen surveillance.[23]

Whittaker claimed that she faced retaliation from Google, and wrote in an open letter that she had been told to 'abandon her work' on enforcing ethics in technology at the AI Now Institute.[23][28][29]

In a note shared internally following her resignation, Whittaker called for tech workers to 'unionize in a way that works, protect conscientious objectors and whistleblowers, demand to know what you’re working on and how it’s used, and to build solidarity with other tech workers beyond your company.'[31]

Companies Embrace AI, but Senior Executives Cite Challenges on Alignment, Ethics

The research is from a new GLG survey of more than 160 C-suite executives across three sectors – financial services, healthcare, and consulting – conducted between September 9 and November 13, 2019.

For example, the survey found that the vast majority of corporate leaders believe AI use will keep growing: 90% or more of executives in healthcare, consulting, and financial services believe AI will transform their businesses within the next five years, and 98% plan to continue or expand their AI development.

Along with quantitative data, the report contains comments from survey respondents reflecting the mixed perspectives of executives on AI’s future: “The data from our GLG survey is among the first from a broad cross section of senior executives,”

Concerned about the impacts of data misuse? Ways to get involved with the USF Center for Applied Data Ethics

“I really do think [nbdev] is a huge step forward for programming environments”: Chris Lattner, inventor of Swift, LLVM, and Swift Playgrounds.

And so forth… We believe that the very process of exploration is valuable in itself, and that this process should be saved so that other programmers (including yourself in six months time) can see what happened and learn by example.

They switch to get features like good doc lookup, good syntax highlighting, integration with unit tests, and (critically!) the ability to produce final, distributable source code files, as opposed to notebooks or REPL histories.

To support this kind of exploration, nbdev is built on top of Jupyter Notebook (which also means we get much better support for Python’s dynamic features than in a normal editor or IDE), and adds critically important tools for software development. Here’s a snippet from our actual “source code” for nbdev, which is itself written in nbdev (of course!). As you can see, when you build software this way, everyone in your project team gets to benefit from the work you do in building an understanding of the problem domain, such as file formats, performance characteristics, API edge cases, and so forth.

First, let’s talk about a little history… (And if you’re not interested in the history, you can skip ahead to What’s missing in Jupyter Notebook.) Most software development tools are not built from the foundations of thinking about exploratory programming.

It seemed to me at the time that this approach, where an entire software system would be defined in minute detail upfront, and then coded as closely to the specification as possible, did not fit at all well with how I actually got work done.

He describes it as “a methodology that combines a programming language with a documentation language, thereby making programs more robust, more portable, more easily maintained, and arguably more fun to write than programs that are written only in a high-level language.”

Nearly 30 years later another brilliant and revolutionary thinker, Bret Victor, expressed his deep discontent with the current generation of development tools, and described how to design “a programming system for understanding programs”.

As he said in his groundbreaking speech “Inventing on Principle”: “Our current conception of what a computer program is — a list of textual definitions that you hand to a compiler — that’s derived straight from Fortran and ALGOL in the late ‘50’s.

(JavaScript front-end programming is however increasingly borrowing ideas from those approaches, such as hot reloading and in-browser live editing.) Matlab, for instance, started out as an entirely interactive tool back in the 1970’s, and today is still widely used in engineering, biology, and various other areas (it also provides regular software development features nowadays).

To do this, it used a “notebook” interface, which behaved a lot like a traditional REPL, but also allowed other types of information to be included, including charts, images, formatted text, outlining sections, and so forth.

In the end though, Mathematica didn’t really help me build anything useful, because I couldn’t distribute my code or applications to colleagues (unless they spent thousands of dollars for a Mathematica license to use it), and I couldn’t easily create web applications for people to access from the browser.

This used the same basic notebook interface as Mathematica (although, at first, with a small subset of the functionality) but was open source, and allowed me to write in languages that were widely supported and freely available.

Many students have found that the ability to experiment with inputs and view intermediate results and outputs, as well as try out their own modifications, helped them to more fully and deeply understand the topics being discussed.

We are also writing a book entirely using Jupyter Notebooks, which has been an absolute pleasure, allowing us to combine prose, code examples, hierarchical structured headings, and so forth, whilst ensuring that our sample outputs (including charts, tables, and images) always correctly match up to the code examples.

For instance, it doesn’t really provide a way to do things like: Because of this, people generally have to switch between a mix of poorly integrated tools, with significant friction as they move from tool to tool, to get the advantages of each: We decided that the best way to handle these things was to leverage great tools that already exist, where possible, and build our own where needed.

With this command, nbdev will simply use your cell outputs where there are conflicts in outputs, and if there are conflicts in cell inputs, then both cells are included in the final notebook, along with conflict markers so you can easily find them and fix them directly in Jupyter.

For instance, you can add methods to a class at any time, you can change the way that classes are created and how they work by using the metaclass system, and you can change how functions and methods behave by using decorators.
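
A few minimal illustrations of those dynamic features (our own snippets, not from the nbdev source):

```python
# 1. Add a method to a class at any time ("monkey patching"):
class Greeter:
    pass

Greeter.hello = lambda self, name: f"hello, {name}"
print(Greeter().hello("world"))

# 2. Change how functions behave with a decorator:
def shout(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hi, {name}"

print(greet("nbdev"))  # HI, NBDEV

# 3. Metaclasses can change how classes themselves are created:
class AutoRepr(type):
    def __new__(mcls, name, bases, ns):
        ns.setdefault("__repr__", lambda self: f"<{name} instance>")
        return super().__new__(mcls, name, bases, ns)

class Point(metaclass=AutoRepr):
    pass

print(repr(Point()))  # <Point instance>
```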

However, with a truly dynamic language like python, such information will always just be guesses, since actually providing the correct information would require running the python code itself (which it can’t really do, for all kinds of reasons - for instance the code may be in a state while you’re writing it that actually deletes all your files!) On the other hand, a notebook contains an actual running Python interpreter instance that you’re fully in control of.

Will the future of work be ethical? Founder perspectives

In June, Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT ...

AI Ethics and the Future of Work (CXOTalk #319)

Read the full transcript: As artificial intelligence grows in importance, how should we address the impact of ...

AI and Ethics: People, Robots and Society

Jack Clark, strategy and communications director at OpenAI, Milind Tambe, founding co-director of the Center for Artificial Intelligence in Society at the University ...

AI, Ethics, and the Value Alignment Problem with Meia Chita-Tegmark and Lucas Perry

What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can't even agree on what we value?

Amir Husain: "The Sentient Machine: The Coming Age of Artificial Intelligence" | Talks at Google

The Sentient Machine addresses broad existential questions surrounding the coming of AI: Why are we valuable? What can we create in this world? How are we ...

The Future (and Past) of AI - Kathryn Hume (integrate.ai)

Startupfest 2017 – The history of AI in the 20th century was like the weather in Montreal: beautiful, hopeful summers followed by long, depressing ...

PANEL DISCUSSION | The Future of Work in a World of Robotics & Advanced AI | Vested Summit 2018

Speakers: Liza Lichtinger, Owner @ Future Design Station; De Kai, Professor @ HKUST. Moderator: Ahmed Alaa, CEO @ Sudotech. “Let's bring back humanity to ...

The Future of Work in the AI Age

Experts anticipate that somewhere between 75 million and 375 million people may need to switch occupational categories by 2030. Businesses must prepare ...

The Age of Artificial Intelligence - The Rise of the Thinking Machines - Debate @ EdTechXEurope 2016

The future of artificial intelligence and its role in education was discussed at EdTechXEurope in London on 16th June 2016. Jozef Misik, MD of Knowable ...

Is Viv the AI that will enslave us all?