Artificial Intelligence for Computational Sustainability: A Lab Companion/Constraint-Based Reasoning and Optimization

Many problems in sustainability require moving beyond the classic goal-based search methods of the previous chapter to include knowledge of hard constraints that cannot be violated, as well as preferences for some solution states over others.

Optimization problems assume that some solutions are preferable to others: there is a partial ordering of quality over solution (or goal) states, and it is desirable to find the best, or optimal, of these states according to an objective function that evaluates the quality of each solution state.

In the grizzly bear corridor design problem, the objective function that scores the quality of different solutions (i.e., corridors) can include factors such as the monetary cost of the land purchased for each corridor; if purchase cost were the only factor, then cheaper corridors (among those satisfying the hard constraints) would be preferable.

Other factors matter as well: human encroachment may be a more likely eventuality for some corridors than for others, and climate change may alter the habitat of one of the grizzlies' favorite foods, causing more grizzlies to stray.

As another example, consider the placement of wind farms in a country such as China (see Powell and colleagues (ref)), where optimization techniques are used to find the best placement of wind farms based on current wind trends; these trends may evolve with climate change, rendering the wind farm placement no longer optimal.

As a refresher on unconstrained optimization, consider the classic calculus problem of finding the point(s) on the graph of \(y = x^2 + 1\) that are closest to \(\left( {0,2} \right)\). The point that minimizes the distance will also minimize the square of the distance, and since the square is easier to work with, we minimize

\[f\left( x \right) = x^2 + \left( y - 2 \right)^2 = x^2 + \left( x^2 - 1 \right)^2\]

Differentiating gives

\[f'\left( x \right) = 2x + 4x\left( x^2 - 1 \right) = 2x\left( 2x^2 - 1 \right)\]

So there are three critical points for the square of the distance, \(x = 0\) and \(x = \pm \frac{1}{\sqrt{2}}\), and notice that this time, unlike pretty much every previous example, we can't exclude zero or negative numbers. Nor are there endpoints to evaluate, since the domain is all real numbers, so we instead look at where the function is increasing and decreasing. To the left of \(x = - \frac{1}{\sqrt{2}}\) the function is decreasing until it hits \(x = - \frac{1}{\sqrt{2}}\), and so must always be larger than the function at \(x = - \frac{1}{\sqrt{2}}\); by the symmetry \(f\left( -x \right) = f\left( x \right)\), the same holds to the right of \(x = \frac{1}{\sqrt{2}}\). Between the two, \(f\) rises to a local maximum at \(x = 0\), where \(f\left( 0 \right) = 1 > \frac{3}{4} = f\left( \pm \frac{1}{\sqrt{2}} \right)\), and falls again. Notice as well that since the shortest distance isn't at \(x = 0\), there are two points on the graph that give the shortest distance:

\[\left( { - \frac{1}{\sqrt{2}},\frac{3}{2}} \right) \quad \mbox{and} \quad \left( {\frac{1}{\sqrt{2}},\frac{3}{2}} \right)\]

This solution method shows how tricky it can be to know that we have absolute extrema when there are multiple critical points and none of the methods discussed in the last section will work. An easier approach is to eliminate \(x\) instead: on the graph, \(x^2 = y - 1\), so we can minimize

\[g\left( y \right) = \left( y - 1 \right) + \left( y - 2 \right)^2, \qquad g'\left( y \right) = 1 + 2\left( y - 2 \right) = 2y - 3\]

There is now a single critical point, \(y = \frac{3}{2}\), and since the second derivative \(g''\left( y \right) = 2\) is always positive, we know that this point must give the absolute minimum.
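The critical-point computation above can be spot-checked numerically. A minimal Python sketch (the function and variable names are ours), assuming the curve \(y = x^2 + 1\) and the target point \(\left( {0,2} \right)\):

```python
import math

# Squared distance from a point (x, x**2 + 1) on the parabola to (0, 2).
def sq_dist(x):
    return x**2 + (x**2 - 1)**2

# The three critical points found by solving f'(x) = 2x(2x^2 - 1) = 0.
critical = [0.0, -1 / math.sqrt(2), 1 / math.sqrt(2)]

for x in critical:
    print(f"x = {x:+.4f}   f(x) = {sq_dist(x):.4f}")

# x = +/- 1/sqrt(2) give the minimum squared distance 3/4 (so the shortest
# distance is sqrt(3)/2); x = 0 is only a local maximum among the three.
```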

Constrained optimization

In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables.

The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized.

Constraints can be either hard constraints, which set conditions on the variables that are required to be satisfied, or soft constraints, which penalize the objective function if, and to the extent that, certain conditions on the variables are not satisfied.





A general constrained minimization problem may be written as:

\[\min_{\mathbf{x}} \; f(\mathbf{x})\]
\[\mathrm{subject~to} \quad g_{i}(\mathbf{x}) = c_{i} ~\mathrm{for~} i = 1,\ldots,n \qquad \mathrm{(equality~constraints)}\]
\[\phantom{\mathrm{subject~to}} \quad h_{j}(\mathbf{x}) \geq d_{j} ~\mathrm{for~} j = 1,\ldots,m \qquad \mathrm{(inequality~constraints)}\]

In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.

Many constrained problems can be attacked by adapting an unconstrained optimization method, for example by folding constraint violations into the objective as penalty terms. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence.
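One common adaptation is the quadratic penalty method, sketched below on a toy problem of our own (plain gradient descent; the step size and the schedule for the penalty weight \(\mu\) are ad hoc choices, not a prescribed recipe):

```python
# Quadratic-penalty sketch (toy example): minimize
#   f(x, y) = (x - 2)**2 + (y - 2)**2   subject to   x + y = 1.
# The exact solution is (0.5, 0.5). We fold the constraint into the
# objective as mu * (x + y - 1)**2 and minimize by gradient descent,
# increasing mu so the penalty term progressively dominates.

def penalized_grad(x, y, mu):
    g = 2 * mu * (x + y - 1)           # derivative of the penalty term
    return 2 * (x - 2) + g, 2 * (y - 2) + g

x, y = 0.0, 0.0
for mu in [1.0, 10.0, 100.0, 1000.0]:
    for _ in range(20000):
        dx, dy = penalized_grad(x, y, mu)
        step = 1.0 / (4 * mu + 4)      # small enough step for this mu
        x, y = x - step * dx, y - step * dy
print(round(x, 3), round(y, 3))        # close to (0.5, 0.5)
```

For each finite \(\mu\) the penalized minimizer is slightly off the constraint surface; only as \(\mu \to \infty\) does it approach the true constrained solution, which is one reason more careful methods (augmented Lagrangians, interior points) exist.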

If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints.
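As a small worked illustration (our own toy problem, not from the text): minimize \(f\left( x,y \right) = x^2 + y^2\) subject to the single equality constraint \(x + y = 1\). Form the Lagrangian

\[\mathcal{L}\left( x,y,\lambda \right) = x^2 + y^2 - \lambda \left( x + y - 1 \right)\]

Setting the partial derivatives of \(\mathcal{L}\) to zero gives \(2x - \lambda = 0\), \(2y - \lambda = 0\), and \(x + y = 1\), whose solution is \(x = y = \frac{1}{2}\) with \(\lambda = 1\). The original constrained problem in two variables has become an unconstrained problem in \(2 + 1 = 3\) variables, matching the count described above.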

Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
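To illustrate with a small toy problem of our own: to minimize \(x^2 + y^2\) subject to the linear equality constraint \(x + y = 1\), solve the constraint for \(y = 1 - x\) and substitute it out of the objective:

\[\min_x \; x^2 + \left( 1 - x \right)^2, \qquad \frac{d}{dx}\left[ x^2 + \left( 1 - x \right)^2 \right] = 4x - 2 = 0 \;\; \Rightarrow \;\; x = \frac{1}{2}\]

leaving an unconstrained problem in a single variable, with solution \(\left( x,y \right) = \left( \frac{1}{2},\frac{1}{2} \right)\).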

With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.

If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem.

This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods which are guaranteed to work in polynomial time.
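A defining property of linear programs is that an optimum, when one exists, is attained at a vertex of the feasible polytope. The brute-force sketch below (our own toy problem; not a real simplex implementation) exploits this by enumerating candidate vertices directly:

```python
from itertools import combinations

# Toy LP: maximize 3x + 2y subject to
#   x + y <= 4,   x <= 2,   x >= 0,   y >= 0.
# Each constraint is written as a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Candidate vertices: intersections of every pair of constraint boundaries.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                      # parallel boundaries never intersect
    x = (c1 * b2 - c2 * b1) / det     # Cramer's rule for the 2x2 system
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

best = max(vertices, key=lambda v: 3 * v[0] + 2 * v[1])
print(best, 3 * best[0] + 2 * best[1])   # (2.0, 2.0) with objective 10.0
```

Enumerating vertices is exponential in general, which is exactly why the simplex and interior-point methods exist; the sketch only illustrates the geometry.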

If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem.

Branch-and-bound algorithms can solve such problems: these are backtracking algorithms that store the cost of the best solution found during execution and use it to avoid part of the search.

More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution.

Assuming that cost is to be minimized, the efficiency of these algorithms depends on how the cost obtainable from extending a partial solution is evaluated.

The higher this estimated cost, the better the algorithm, as a higher estimate is more likely to exceed the best cost of any solution found so far, allowing more of the search to be pruned.

On the other hand, this estimated cost cannot be higher than the cost actually achievable by extending the partial solution, as otherwise the algorithm could backtrack while a solution better than the best found so far still exists.

As a result, the algorithm requires a lower bound on the cost that can be obtained from extending a partial solution, and this lower bound should be as large (that is, as tight) as possible.
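The pruning scheme can be sketched on a tiny weighted constraint problem (our own toy instance and encoding, not an implementation from the text). The lower bound used here is simply the cost already incurred by fully assigned soft constraints, which is admissible because all remaining costs are nonnegative:

```python
# Branch and bound for a toy weighted CSP: binary variables x0..x2; each
# soft constraint maps an assignment of its scope to a nonnegative cost.
# We minimize the total cost over all complete assignments.

SOFT = [
    (("x0",), {(0,): 2, (1,): 0}),
    (("x0", "x1"), {(0, 0): 0, (0, 1): 3, (1, 0): 1, (1, 1): 0}),
    (("x1", "x2"), {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 2}),
]
VARS = ["x0", "x1", "x2"]

def assigned_cost(assignment):
    """Total cost of soft constraints whose scope is fully assigned."""
    total = 0
    for scope, table in SOFT:
        if all(v in assignment for v in scope):
            total += table[tuple(assignment[v] for v in scope)]
    return total

best = {"cost": float("inf"), "sol": None}

def branch(assignment):
    bound = assigned_cost(assignment)   # lower bound on any extension
    if bound >= best["cost"]:
        return                          # prune: cannot beat the incumbent
    if len(assignment) == len(VARS):
        best["cost"], best["sol"] = bound, dict(assignment)
        return
    var = VARS[len(assignment)]
    for value in (0, 1):
        assignment[var] = value
        branch(assignment)
        del assignment[var]

branch({})
print(best["sol"], best["cost"])
```

Tighter bounds (for example, adding each unassigned constraint's minimum possible cost) prune more of the tree at the price of more work per node.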

A simple bound sums, over the soft constraints, the best (minimal) cost each can achieve on its own. This bound is not exact, however, because the minimal costs of different soft constraints may derive from different evaluations: one soft constraint may achieve its minimum when a variable takes one value, while another achieves its minimum only when that same variable takes a different value.











In effect, this corresponds to ignoring the assigned variables and solving the problem on the unassigned ones, except that the latter problem has already been solved.

More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using some other method), while the cost of soft constraints containing only unassigned variables is estimated using the optimal solution of the corresponding subproblem, which is already known at this point.

Indeed, a given variable can be removed from the problem by replacing all soft constraints containing it with a single new soft constraint over the remaining variables of those scopes; the cost of this new constraint, for each assignment of those variables, is the minimum over the removed variable's values of the total cost of the replaced constraints.
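This elimination step can be sketched as follows (a minimal sketch with our own toy encoding: each soft constraint is a pair of a scope tuple and a cost table, and all variables are binary):

```python
from itertools import product

def eliminate(var, constraints):
    """Remove `var` by folding every constraint that mentions it into one
    new constraint over the remaining variables of those scopes."""
    touching = [c for c in constraints if var in c[0]]
    rest = [c for c in constraints if var not in c[0]]
    scope = tuple(sorted({v for s, _ in touching for v in s if v != var}))
    table = {}
    for values in product((0, 1), repeat=len(scope)):
        env = dict(zip(scope, values))
        costs = []
        for x in (0, 1):               # try each value of the removed variable
            env[var] = x
            costs.append(sum(t[tuple(env[v] for v in s)] for s, t in touching))
        table[values] = min(costs)     # keep the best achievable cost
    return rest + [(scope, table)]

constraints = [
    (("x0", "x1"), {(0, 0): 0, (0, 1): 3, (1, 0): 1, (1, 1): 0}),
    (("x1", "x2"), {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 2}),
]
# Eliminating x1 collapses both constraints into one over (x0, x2).
scope, table = eliminate("x1", constraints)[0]
print(scope, table)
```

Repeating this for every variable in turn yields the optimal cost of the whole problem, which is the idea behind bucket elimination.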




