AI News, Simons Institute for the Theory of Computing

Simons Institute for the Theory of Computing

This program seeks to develop and apply algorithmic methods for the control of systems characterized by the need to make real-time decisions based on data arriving in high volume.

To this end, the program will create collaborations between two groups of experts: workers in domains of physical science, engineering and societal systems involving real-time discovery and inference, and mathematical and computational scientists with the tools required to attack the decision-theoretic problems arising in these domains.

Mathematical optimization

In mathematics, computer science and operations research, mathematical optimization or mathematical programming, alternatively spelled optimisation, is the selection of a best element (with regard to some criterion) from some set of available alternatives.[1] In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function.

More generally, optimization includes finding 'best available' values of some objective function given a defined domain (or input), including a variety of different types of objective functions and different types of domains.

An optimization problem can be represented in the following way: given a function f : A → ℝ from some set A to the real numbers, find an element x₀ in A such that f(x₀) ≤ f(x) for all x in A (minimization) or such that f(x₀) ≥ f(x) for all x in A (maximization). Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below).
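In the simplest setting, the feasible set is finite and the "best element" can be found by direct comparison. A minimal sketch with a hypothetical objective and candidate set:

```python
# Selecting a best element from a finite set of alternatives with respect
# to an objective function f (illustrative data, not from the text).
def f(x):
    return (x - 3) ** 2 + 1   # objective: minimized at x = 3

candidates = [0, 1, 2, 3, 4, 5]    # the feasible set A
x_min = min(candidates, key=f)     # argmin over A (minimization)
x_max = max(candidates, key=f)     # argmax over A (maximization)
print(x_min, f(x_min))   # 3 1
print(x_max, f(x_max))   # 0 10
```

Real optimization problems usually have infinite or continuous feasible sets, which is where the systematic search methods discussed below come in.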

Problems formulated using this technique in the fields of physics and computer vision may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled.

The function f is called, variously, an objective function, a loss function or cost function (minimization),[2] a utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional.

In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible points), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima.

A large number of algorithms proposed for solving nonconvex problems (including the majority of commercially available solvers) are not capable of distinguishing locally optimal solutions from globally optimal solutions, and will treat the former as actual solutions to the original problem.
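The distinction can be seen with a small experiment. The sketch below (an illustrative function, not from any particular solver) runs plain gradient descent on the nonconvex function f(x) = x⁴ - 3x² + x from two starting points; each run ends at a different local minimum, and the method alone cannot tell which one is global:

```python
# A nonconvex function with two local minima: f(x) = x^4 - 3x^2 + x.
def f(x):
    return x**4 - 3*x**2 + x

def grad(x):
    return 4*x**3 - 6*x + 1

def gradient_descent(x, lr=0.01, steps=5000):
    # Follow the negative gradient; converges only to a *local* minimum.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = gradient_descent(2.0)    # ends near x ≈  1.13 (local, not global)
b = gradient_descent(-2.0)   # ends near x ≈ -1.30 (the global minimum here)
print(round(a, 2), round(f(a), 2))
print(round(b, 2), round(f(b), 2))
```

Both endpoints are stationary (the gradient vanishes), yet they have different objective values; only a global method can certify which is best.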

Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem.

(Programming in this context does not refer to computer programming, but derives from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the simplex algorithm in 1947, and John von Neumann developed the theory of duality in the same year.

Other major researchers have contributed to mathematical optimization. In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time). Adding more than one objective to an optimization problem adds complexity.

A design is judged to be 'Pareto optimal' (equivalently, 'Pareto efficient' or in the Pareto set) if it is not dominated by any other design: if it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal.
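The dominance relation is easy to make concrete. A minimal sketch with a hypothetical list of designs, each scored on two objectives to be minimized (say, cost and weight):

```python
# Computing the Pareto set of a small set of designs (illustrative data).
designs = {"A": (1.0, 5.0), "B": (2.0, 3.0), "C": (3.0, 1.0),
           "D": (3.0, 3.0), "E": (4.0, 4.0)}

def dominates(p, q):
    # p dominates q: no worse in every objective, strictly better in at least one
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

pareto = [name for name, p in designs.items()
          if not any(dominates(q, p) for q in designs.values())]
print(pareto)   # ['A', 'B', 'C']
```

Here D is dominated by B and E is dominated by both, so only A, B and C lie on the Pareto front; no single one of them is "best" without weighing the objectives against each other.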

Because of their iterative approach, classical optimization techniques do not perform satisfactorily when used to obtain multiple solutions, since obtaining different solutions is not guaranteed even when the algorithm is run multiple times with different starting points.

One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test).

When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems.
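The first- and second-derivative tests can be worked through on a simple example. The sketch below (an illustrative unconstrained 1-D function, not from the text) classifies the stationary points of f(x) = x³ - 3x:

```python
# First- and second-derivative test on f(x) = x^3 - 3x.
def fprime(x):
    return 3*x**2 - 3   # zero at x = -1 and x = 1 (the stationary points)

def fsecond(x):
    return 6*x          # second derivative (the 1-D "Hessian")

for x in (-1.0, 1.0):
    assert abs(fprime(x)) < 1e-12   # confirm stationarity
    if fsecond(x) > 0:
        kind = "local min"
    elif fsecond(x) < 0:
        kind = "local max"
    else:
        kind = "inconclusive"
    print(x, kind)   # -1.0 local max, then 1.0 local min
```

In higher dimensions the same test checks whether the Hessian matrix at a stationary point is positive definite (local minimum), negative definite (local maximum), or indefinite (saddle point).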

To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge).

While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration.
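The trade-off between convergence rate and per-iteration cost can be illustrated by comparing Newton's method (which uses second-derivative information) against plain gradient descent on a smooth function. This is a sketch on an assumed test function, f(x) = cosh(x), whose minimum is at x = 0:

```python
import math

# Newton's method: x <- x - f'(x)/f''(x); here f' = sinh, f'' = cosh.
def newton(x, tol=1e-10, max_iter=100):
    for k in range(max_iter):
        step = math.sinh(x) / math.cosh(x)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

# Plain gradient descent: x <- x - lr * f'(x); cheaper per step, more steps.
def gd(x, lr=0.5, tol=1e-10, max_iter=10000):
    for k in range(max_iter):
        step = lr * math.sinh(x)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

x_n, it_n = newton(1.0)
x_g, it_g = gd(1.0)
print(it_n, it_g)   # Newton needs far fewer iterations than gradient descent
```

Newton's quadratic convergence reaches the tolerance in a handful of iterations, while gradient descent takes dozens; each Newton iteration, however, pays for a (here trivial, in general expensive) second-derivative evaluation.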

One major criterion for optimizers is simply the number of required function evaluations, as these often dominate the computational effort, usually far exceeding the work done within the optimizer itself, which mainly has to operate over the N variables.

Many well-known heuristics have been proposed. Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold.

One subset is the engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.

This approach may be applied in cosmology and astrophysics.[4] Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the 'study of human behavior as a relationship between ends and scarce means' with alternative uses.[5] Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria.

For example, microeconomists use dynamic search models to study labor-market behavior.[6] A crucial distinction is between deterministic and stochastic models.[7] Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.[8][9] Some common applications of optimization techniques in electrical engineering include active filter design,[10] stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures,[11] handset antennas,[12][13][14] electromagnetics-based design.

Since the discovery of space mapping in 1993, electromagnetically validated design optimization of microwave components and antennas has made extensive use of appropriate physics-based or empirical surrogate models and space mapping methodologies.[15][16] Optimization has also been widely used in civil engineering.

The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures,[17] resource leveling[18] and schedule optimization.

These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled.
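The receding-horizon idea behind such online controllers can be sketched in a few lines. The example below assumes a toy scalar plant x[t+1] = x[t] + u[t] with a bound |u| ≤ 0.5 on the decision variable, and solves each horizon problem by exhaustive search (real controllers use structured solvers instead):

```python
import itertools

U = [-0.5, -0.25, 0.0, 0.25, 0.5]   # discretized admissible controls
HORIZON = 3

def cost_of(x, u_seq):
    # Simulate the model over the horizon, summing a state + control penalty.
    cost = 0.0
    for u in u_seq:
        x = x + u                    # assumed plant model
        cost += x**2 + 0.1 * u**2
    return cost

def mpc_step(x):
    # Solve the horizon problem, then apply only the first move (receding horizon).
    best = min(itertools.product(U, repeat=HORIZON), key=lambda s: cost_of(x, s))
    return best[0]

x = 2.0
for _ in range(10):                  # the online control loop
    x = x + mpc_step(x)
print(round(x, 3))                   # the state is driven to 0
```

At every time step the controller re-solves a small constrained optimization using the latest measurement, which is exactly the "repeatedly determine values for decision variables" pattern described above.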

Relaxation is the general name for methods that replace an optimization problem lacking existence or stability of solutions with a suitable (in some sense) natural extension for which existence of solutions and their stability are guaranteed, typically under various perturbations of the data.

Control theory

Control theory in control systems engineering deals with the control of continuously operating dynamical systems in engineered processes and machines.

The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability.

The difference between actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point.

This is feedback control, which is usually continuous and involves taking measurements using a sensor and making calculated adjustments to keep the measured variable within a set range by means of a 'final control element', such as a control valve.[1] Extensive use is usually made of a diagrammatic style known as the block diagram.

In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.

Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell.[2] Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria;

and from 1922 onwards, the development of PID control theory by Nicolas Minorsky.[3] Although a major application of control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this.

A few examples are in physiology, electronics, climate modeling, machine design, ecosystems, navigation, neural networks, predator–prey interaction, gene expression, and production theory.[4]

Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors.[5] This described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior.

This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems.[6] Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem.[7][8]

The Wright brothers made their first successful test flights on December 17, 1903 and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known).

Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft.[9][10] Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.

In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.

In closed loop control, the control action from the controller is dependent on feedback from the process in the form of the value of the process variable (PV).

In the case of the boiler analogy, a closed loop would include a thermostat to compare the building temperature (PV) with the temperature set on the thermostat (the set point - SP).

A closed loop controller, therefore, has a feedback loop which ensures the controller exerts a control action to manipulate the process variable to be the same as the 'Reference input' or 'set point'.

For this reason, closed loop controllers are also called feedback controllers.[12] The definition of a closed loop control system according to the British Standards Institution is 'a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero.'[13] Likewise:

'A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control.'[14] An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver.

However, if the cruise control merely holds the throttle at a fixed position, then the car will travel slower going uphill and faster going downhill.

In an open-loop controller, no measurement of the system output (the car's speed) is used to alter the control (the throttle position). As a result, the controller cannot compensate for changes acting on the car, such as a change in the slope of the road.

In a closed-loop control system, data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed.

As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle.
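The feedback loop just described can be simulated in a few lines. This is a hypothetical cruise-control sketch with a proportional controller and a simple discrete plant update; the gain and disturbance values are illustrative:

```python
SP = 30.0          # set point: desired speed (m/s)
Kp = 0.8           # proportional gain (illustrative value)
speed = 30.0       # measured process variable (PV)
slope_drag = 0.2   # disturbance: an uphill grade slows the car each step

for _ in range(50):
    error = SP - speed                  # the SP - PV error signal
    throttle = Kp * error               # proportional control action
    speed += throttle - slope_drag      # simple discrete plant update
print(round(speed, 2))                  # settles near the set point despite the slope
```

With pure proportional control the loop settles at about 29.75 m/s rather than exactly 30: the residual offset is the classic steady-state error that the integral action of a PID controller removes.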

The central idea of these control systems is the feedback loop: the controller affects the system output, which in turn is measured and fed back to the controller.

Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller;

If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., elements of their transfer function C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables.

In that case, the closed-loop transfer function from the reference input r to the output y is

    H(s) = P(s)C(s) / (1 + F(s)P(s)C(s)).

The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain.
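The closed-loop gain H(s) = P(s)C(s) / (1 + F(s)P(s)C(s)) can be evaluated numerically at any point of the complex plane with plain complex arithmetic. A sketch with an assumed first-order plant, a proportional controller, and unity feedback:

```python
def P(s):
    return 1 / (s + 1)       # assumed plant transfer function

def C(s):
    return 10.0              # proportional controller

def F(s):
    return 1.0               # unity feedback sensor

s = 1j * 2.0                             # evaluate at angular frequency w = 2 rad/s
forward = P(s) * C(s)                    # forward (open-loop) gain from r to y
H = forward / (1 + F(s) * forward)       # closed-loop gain: forward / (1 + loop gain)
print(abs(H))                            # magnitude of the closed-loop response at w = 2
```

Sweeping s = jw over a range of frequencies in this way produces the data behind a Bode magnitude plot of the closed-loop system.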

If e(t) = r(t) - y(t) is the tracking error, a PID controller has the general form

    u(t) = K_P e(t) + K_I ∫ e(τ) dτ + K_D de(t)/dt.

The desired closed-loop dynamics is obtained by adjusting the three parameters K_P, K_I and K_D.

Applying the Laplace transform results in the transformed PID controller equation u(s) = C(s) e(s), with the PID controller transfer function

    C(s) = K_P + K_I/s + K_D s.
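A discrete-time version of the textbook PID law u = K_P e + K_I ∫e + K_D de/dt can be simulated directly. The plant and gains below are illustrative assumptions, not taken from the text:

```python
# PID control of an assumed first-order plant dy/dt = -y + u.
def pid_simulate(kp, ki, kd, sp=1.0, dt=0.01, steps=2000):
    y, integral, prev_e = 0.0, 0.0, sp
    for _ in range(steps):
        e = sp - y                      # tracking error e = r - y
        integral += e * dt              # integral term accumulates past error
        derivative = (e - prev_e) / dt  # derivative term reacts to error rate
        u = kp * e + ki * integral + kd * derivative
        prev_e = e
        y += dt * (-y + u)              # forward-Euler plant update
    return y

print(round(pid_simulate(2.0, 1.0, 0.1), 3))   # output settles at the set point
```

At equilibrium the proportional and derivative terms vanish, and the integral term alone supplies the control effort needed to hold the output at the set point, which is why the steady-state error is zero here.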






The field of control theory can be divided into two branches, and the mathematical techniques for analyzing and designing control systems likewise fall into two categories. In contrast to the frequency-domain analysis of classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations.

To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear).

The state space representation (also known as the 'time-domain approach') provides a convenient and compact way to model and analyze systems with multiple inputs and outputs.

The state of the system can be represented as a point within that space.[17][18] Control systems can be divided into different categories depending on the number of inputs and outputs.
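A discrete-time state-space model x[k+1] = A x[k] + B u[k], y[k] = C x[k] can be simulated with plain lists. The two-state system below is an illustrative example, not one taken from the text:

```python
# Assumed stable 2-state system driven by a constant unit input.
A = [[0.9, 0.1],
     [0.0, 0.8]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]

def step(x, u):
    # x[k+1] = A x[k] + B u[k]; y[k+1] = C x[k+1]
    x_next = [sum(A[i][j] * x[j] for j in range(2)) + B[i][0] * u
              for i in range(2)]
    y = sum(C[0][j] * x_next[j] for j in range(2))
    return x_next, y

x = [0.0, 0.0]
for k in range(100):
    x, y = step(x, 1.0)
print(round(y, 3))   # the output approaches its steady-state value of 5.0
```

Because both eigenvalues of A (0.9 and 0.8) have modulus below one, the state converges; the same matrix formulation scales unchanged to systems with many inputs and outputs.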

Practically speaking, stability requires that the complex poles of the transfer function reside in the open left half of the complex plane (in the continuous-time case) or inside the unit circle (in the discrete-time case). The difference between the two cases is simply due to the traditional method of plotting continuous-time versus discrete-time transfer functions.

Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case).
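These stability conditions reduce to a simple check on pole locations, sketched here with illustrative pole sets:

```python
# Continuous time: all poles must have strictly negative real part.
def stable_continuous(poles):
    return all(p.real < 0 for p in poles)

# Discrete time: all poles must lie strictly inside the unit circle.
def stable_discrete(poles):
    return all(abs(p) < 1 for p in poles)

print(stable_continuous([-1 + 2j, -1 - 2j]))   # True: damped oscillation
print(stable_continuous([0 + 3j, 0 - 3j]))     # False: permanent oscillation (real part 0)
print(stable_discrete([0.5, -0.9]))            # True
print(stable_discrete([1.0 + 0j]))             # False: marginal (modulus exactly one)
```

The boundary cases (real part zero, modulus one) are exactly the permanent-oscillation poles described above.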

Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.

Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system.

From a geometrical point of view, looking at the states of each variable of the system to be controlled, every 'bad' state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system.

Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.

These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).

These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay).

Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible.

Even assuming that a 'complete' model is used in designing the controller, all the parameters included in these equations (called 'nominal parameters') are never known with absolute precision.

In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.

Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams.

For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions.

In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems.

These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory.

Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the non-linear case, as well as showing the subtleties that make it a more challenging problem.

Control theory has also been used to decipher the neural mechanism that directs cognitive states.[19] When the system is controlled by multiple controllers, the problem is one of decentralized control.

Open Positions in Cryptology

We are looking for a talented cryptography and distributed systems engineer to work on various aspects of the core IoTeX blockchain technologies, with emphasis on the design, analysis and implementation of innovative and efficient cryptographic algorithms and protocols that improve on the scalability, security and privacy of existing blockchain methodologies and pave the way for securing a wide range of Internet of Things (IoT) applications with IoTeX blockchains.

8. Web Security Model

MIT 6.858 Computer Systems Security, Fall 2014 View the complete course: Instructor: James Mickens In this lecture, Professor Mickens introduces the concept of..

9. Securing Web Applications

MIT 6.858 Computer Systems Security, Fall 2014 View the complete course: Instructor: James Mickens In this lecture, Professor Mickens continues looking at how to..

Find out everything you wanted to know about SHA-1 to SHA-2 migrations

You're already seeing SHA-1 hash warnings. You've basically got 2-14 months to get migrated, depending on your applications. Attend this session and learn everything about SHA-1 to SHA-2 migrations...

1. Introduction, Threat Models

MIT 6.858 Computer Systems Security, Fall 2014 View the complete course: Instructor: Nickolai Zeldovich In this lecture, Professor Zeldovich gives a brief overview..

SANS DFIR Webcast - APT Attacks Exposed: Network, Host, Memory, and Malware Analysis

For many years, professionals have been asking to see real APT data in a way that shows them how the adversaries compromise and maintain presence on our networks. Now you can experience it...

Building Brains to Understand the World's Data

Google Tech Talk February 12, 2013 (more info below) Presented by Jeff Hawkins. ABSTRACT The neocortex works on principles that are fundamentally different than traditional computers. In...

2. Models of Computation, Document Distance

MIT 6.006 Introduction to Algorithms, Fall 2011 View the complete course: Instructor: Erik Demaine License: Creative Commons BY-NC-SA More information at

William Oliver: "Quantum Engineering of Superconducting Qubits"

William Oliver visited the Google LA Quantum AI Lab on August 13, 2015. Abstract: Superconducting qubits are coherent artificial atoms assembled from electrical circuit elements. Their lithograph...

Domain Adaptation with Structural Correspondence Learning

Google Tech Talks September, 5 2007 ABSTRACT Statistical language processing tools are being applied to an ever-wider and more varied range of linguistic data. Researchers and engineers...

DEF CON 22 - Dr. Paul Vixie - Domain Name Problems and Solutions

Slides Here: White paper available for download here: