AI News, Nuit Blanche


Optimization for Machine Learning, NIPS*2008 Workshop, December 12-13, 2008, Whistler, Canada. URL: http://opt2008.kyb.tuebingen.mpg.de/

* Combinatorial Optimization; example problems in ML include:
  - Estimating MAP solutions to discrete random fields
  - Clustering and graph partitioning
  - Semi-supervised and multiple-instance learning
  - Feature and subspace selection

* Algorithms and Techniques, especially with a focus on an underlying application:
  - Polyhedral combinatorics, polytopes and strong valid inequalities
  - Linear and higher-order relaxations
  - Semidefinite programming relaxations
  - Decomposition for large-scale, message-passing and online learning
  - Global and Lipschitz optimization
  - Algorithms for non-smooth optimization
  - Approximation algorithms

The venue is the Braunschweig Integrated Centre of Systems Biology (BRICS), Technische Universität Braunschweig, Rebenring 56, 38106 Braunschweig, Germany.

- 11:00 Andreas Potschka, 'A sequential homotopy method for unconstrained optimization problems'

- 16:50 Nidhi Kaihnsa, 'Attainable Regions of Bio-Chemical Reactions'
- 17:00 Farewell

To advance research and collaboration between these fields, the workshop aims to bring together scientists working at their intersection, as well as scientists interested in contributing to such collaboration.

Besides the keynote talks, we encourage young researchers to present their work in a 20-minute talk at the workshop.

We seek either work from the intersection of the two fields or presentations posing new, pivotal questions from one field to the other.

Submissions will be chosen for presentation based on an internal review of the submitted extended abstracts.

NIPS 2015 Workshop (LeCun) 15599 Non-convex Optimization for Machine Learning: Theory and Practice

Non-convex optimization is ubiquitous in machine learning. In general, reaching the global optimum of these problems is NP-hard, and in practice local search methods such as gradient descent can get stuck in spurious local optima and suffer from poor convergence.

These algorithms are guaranteed to recover a consistent solution to the parameter estimation problem in many latent variable models such as topic admixture models, HMMs, and ICA, and most recently even in non-linear models such as neural networks.

As another example of guaranteed non-convex methods, there has been interest in the problem of dictionary learning, which involves expressing the observed data as a sparse combination of dictionary elements.
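As a concrete illustration of that formulation (my sketch, not part of the workshop abstract), here is a minimal dictionary-learning example using scikit-learn; the synthetic data, the number of atoms, and the sparsity settings are all illustrative assumptions.

```python
# Minimal sketch: learn a dictionary so that each sample is approximated
# by a sparse combination of dictionary atoms. All parameters here
# (n_components, alpha, the random data) are illustrative assumptions.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))  # 200 samples, 20 features (synthetic)

dico = DictionaryLearning(
    n_components=30,                   # number of dictionary atoms (overcomplete)
    alpha=1.0,                         # sparsity penalty on the codes
    max_iter=100,
    transform_algorithm="lasso_lars",  # sparse coding step
    random_state=0,
)
codes = dico.fit(X).transform(X)  # sparse codes, shape (200, 30)
D = dico.components_              # learned dictionary, shape (30, 20)

reconstruction = codes @ D
print("mean reconstruction error:", np.mean((X - reconstruction) ** 2))
```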

Recent work has established that simple stochastic gradient descent (SGD) with appropriately added noise can escape saddle points and converge to a local optimum in bounded time for a large class of non-convex problems.
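A minimal sketch of the idea (my illustration, not the paper's algorithm): on f(x, y) = (x² - y²)/2, which has a saddle at the origin, plain gradient descent started on the x-axis never leaves it, while adding a little isotropic noise to each step pushes the iterate off the unstable manifold so it escapes along the y-direction.

```python
# Sketch of noise-assisted gradient descent escaping a saddle point.
# f(x, y) = 0.5 * (x**2 - y**2) has a saddle at the origin; started on
# the x-axis, the exact gradient has no y-component, so plain gradient
# descent converges to the saddle. Gaussian noise perturbs the iterate
# off the unstable manifold. (f is unbounded below, so "escaping" here
# means |y| grows; with a true local minimum the iterate would settle.)
import numpy as np

def grad(p):
    x, y = p
    return np.array([x, -y])  # gradient of 0.5 * (x^2 - y^2)

def descend(noise_scale, steps=100, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    p = np.array([1.0, 0.0])  # start exactly on the x-axis
    for _ in range(steps):
        p = p - lr * grad(p) + noise_scale * rng.standard_normal(2)
    return p

print("plain GD final point:", descend(noise_scale=0.0))   # stuck near (0, 0)
print("noisy GD final point:", descend(noise_scale=0.01))  # |y| grows, escapes
```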

For example, many of these methods have shown great promise in diverse application domains such as natural language processing, social networks, health informatics, and biological sequence analysis.

On the practical side, conversations between theorists and practitioners can help identify what kind of conditions are reasonable for specific applications, and thus lead to the design of practically motivated algorithms for non-convex optimization with rigorous guarantees.

NIPS 2015 Workshop (Anandkumar) 15598 Non-convex Optimization for Machine Learning: Theory and Practice

Non-convex optimization is ubiquitous in machine learning. In general, reaching the global optimum of these problems is NP-hard and in practice, local search ...

Optimization for Machine Learning I

Elad Hazan, Princeton University. Foundations of Machine Learning Boot Camp.

What is Optimization? + Learning Gradient Descent | Two Minute Papers #82

Let's talk about what mathematical optimization is, how gradient descent can solve simpler optimization problems, and Google DeepMind's proposed algorithm ...
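As a minimal illustration of the gradient-descent idea mentioned in that blurb (my example, not the video's code): repeatedly step opposite the gradient to minimize a simple convex function.

```python
# Minimal gradient descent on a simple convex function,
# f(w) = (w - 3)^2, whose minimum is at w = 3. The learning rate,
# iteration count, and starting point are illustrative choices.
def f_grad(w):
    return 2.0 * (w - 3.0)  # derivative of (w - 3)^2

w = 0.0   # starting point
lr = 0.1  # learning rate (step size)
for step in range(100):
    w -= lr * f_grad(w)  # move opposite the gradient

print(w)  # approaches 3.0
```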


Francis Bach - Machine learning and optimization for massive data

Huawei-IHÉS Workshop on Mathematical Sciences, Tuesday, May 5th, 2015.

Distributed Optimization via Alternating Direction Method of Multipliers

Problems in areas such as machine learning and dynamic optimization on a large network lead to extremely large convex optimization problems, with problem ...
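To make the method concrete (my sketch, not the talk's code): ADMM splits a problem such as the lasso into easy subproblems coupled through a dual update. The problem data, penalty ρ, and regularization λ below are illustrative assumptions.

```python
# ADMM sketch for the lasso: minimize 0.5*||A x - b||^2 + lam*||x||_1.
# Split into an x-subproblem (ridge-like solve), a z-subproblem
# (soft thresholding), and a dual update u. All data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
b = rng.standard_normal(100)
lam, rho = 0.5, 1.0

n = A.shape[1]
x = z = u = np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
M = np.linalg.inv(AtA + rho * np.eye(n))  # factor once, reuse every iteration

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(200):
    x = M @ (Atb + rho * (z - u))         # x-update: quadratic subproblem
    z = soft_threshold(x + u, lam / rho)  # z-update: proximal step for ||.||_1
    u = u + x - z                         # dual (running residual) update

print("nonzeros in solution:", np.count_nonzero(np.round(z, 6)))
```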

Mathematical Optimization for Machine Learning

Jeremy Watt, Reza Borhani. In this talk we provide a user-friendly introduction to mathematical ...

Workshop: Solving optimization problems with JuliaOpt

Madeleine Udell, Miles Lubin, Iain Dunning, Joey Huchette.

Stanley Osher: "New Techniques in Optimization and Their Applications to Deep Learning..."

New Deep Learning Techniques 2018. "New Techniques in Optimization and Their Applications to Deep Learning and Related Inverse Problems", Stanley Osher ...

Gilles Louppe | Bayesian optimization with Scikit-Optimize

PyData Amsterdam 2017. You are given access to an espresso machine with many buttons and knobs to tweak. Your task is to brew the best cup of espresso ...
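For a concrete taste of the library (a sketch under my own assumptions, not from the talk): scikit-optimize's gp_minimize treats an expensive black-box function, like the espresso machine's knobs, as something to probe only a few times, fitting a Gaussian process to past evaluations to decide where to try next. The objective below is a toy stand-in.

```python
# Bayesian optimization sketch with scikit-optimize (skopt).
# gp_minimize fits a Gaussian process to past evaluations and picks
# the next point via an acquisition function. The objective here is
# a toy stand-in for an expensive black box (e.g., brewing espresso).
from skopt import gp_minimize

def objective(params):
    temperature, pressure = params
    # Pretend quality peaks at 93 degrees C and 9 bar (made-up target).
    return (temperature - 93.0) ** 2 + (pressure - 9.0) ** 2

result = gp_minimize(
    objective,
    dimensions=[(85.0, 96.0), (6.0, 12.0)],  # search range for each knob
    n_calls=30,                              # evaluation budget
    random_state=0,
)
print("best knobs:", result.x, "best score:", result.fun)
```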