AI News, natural language processing blog

natural language processing blog

can consume, especially when that needs statistical modeling of various inputs, is (IMO) a more important and harder problem than details about flow algorithms. Also, I'll note that someone should have called me out for comparing flow (a halfway-through-the-semester topic in an upper division ALG course) with decision trees (probably the first thing you'd do in an ML course).

in the CLR(S) sense. @Balazs: I disagree for essentially the same reasons I agree with Sasho: I'm much more interested in what students should be able to do than what they "need." @Suresh: I'll reply on your blog, but essentially an elaboration of what I said above. @Aaron: I don't think I propose that people shouldn't be able to study the formal properties of algorithms, just that we shouldn't lose sight of the fact that algorithms are an abstraction and most of the work is figuring out how to map your real-world problem to something you can solve.

I did pretty well in programming contests (ACM world finalist) which are basically 80% algorithms and 20% coding. And then I got to Google, and I was thrown onto a ranking/prediction task, and I flailed very badly at first until my project failed.

The principal concern I'd have is that, without a foundation in traditional algorithms, one might not know how to analyze machine learning algorithms or how to formalize the problems to be approximated. We all know (well, maybe) that closed-form solutions don't typically generalize to messy data, but I tend to view algorithms as an introduction to proofs in the context of computer science and a means of gaining an intuition about complexity. As someone relatively new to the modeling culture, one of the biggest hurdles for me, I think, was (and perhaps still is) adjusting to thinking of problems purely in terms of the model.

The student does something ridiculous because the student has little intuition that three nested for loops called on every input are a terrible idea, or about just how much worse each additional loop makes the complexity of the program.
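
To make that point concrete, here is a small, purely illustrative sketch (mine, not the commenter's) of how a bit of algorithmic thinking removes a loop, and why each extra nested loop over the input multiplies the running time by another factor of n:

```python
from collections import Counter
from itertools import combinations

def count_pairs_quadratic(xs, target):
    # O(n^2): examine every pair of elements once.
    return sum(1 for a, b in combinations(xs, 2) if a + b == target)

def count_pairs_linear(xs, target):
    # O(n): count complements with a hash map in a single pass.
    seen, hits = Counter(), 0
    for x in xs:
        hits += seen[target - x]  # pairs completed by x
        seen[x] += 1
    return hits

# Wrapping either function in yet another loop over the input multiplies
# the cost by len(xs) again: that is the "each new loop" effect.
```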

Personally, I found Theory of Computing and Algorithms to be helpful in learning to visualize not only complexity, but things like state spaces and complex traversals. I think it'd be great if ML were common in undergraduate curricula, but I think that Algorithms should probably stay.

Yes, maybe your final goal is hard to define, and your data is dirty, but after you decide on a methodology, and do the modeling, you still end up with a series of basic computational problems, and you still need to solve them.

An algorithms class teaches you how to reason about the latter, but there is no reason you shouldn't prove a margin-based bound on the convergence rate of the perceptron algorithm, study the convergence properties of gradient descent more generally (or multiplicative weights), and analyze Adaboost.
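
As a purely illustrative companion to that point (this sketch is mine, not the commenter's), here is a minimal perceptron; the classic margin-based bound says that on linearly separable data it makes at most (R/γ)² mistakes, where R is the largest example norm and γ the separation margin:

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    # X: (n, d) array of examples; y: labels in {-1, +1}.
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(max_epochs):
        clean_pass = True
        for x, label in zip(X, y):
            if label * np.dot(w, x) <= 0:  # mistake: wrong side of (or on) the hyperplane
                w += label * x             # standard perceptron update
                mistakes += 1
                clean_pass = False
        if clean_pass:                     # no mistakes in a full pass: converged
            break
    return w, mistakes
```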

There is a good case to be made that algorithms classes should be updated to have less focus on classical discrete optimization problems, and to have more focus on convex optimization problems, but I don't see the case at all for eliminating the formal study of algorithms.

(because it has been greatly improved recently). So, if the data is noisy, you can again add a term to compensate for it, and the theoretical runtime does not suffer (http://arxiv.org/abs/1312.6713). Both algorithms are less than a year old and hence are not taught in class yet. I just want to point out that the (theoretically) fastest algorithms for many combinatorial optimization problems have recently been partially replaced by a convex optimization algorithm + a carefully chosen preconditioner.
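
To give a flavour of what a preconditioner buys you (this toy example is mine; the diagonal/Jacobi preconditioner and the tiny quadratic below are for illustration only, and the algorithms referenced above are far more sophisticated):

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T A x - b^T x by gradient descent, with and
# without a simple diagonal (Jacobi) preconditioner.
A = np.diag([1.0, 100.0])          # deliberately ill-conditioned
b = np.array([1.0, 1.0])           # exact minimizer is A^{-1} b = [1.0, 0.01]

def gd(precond, lr, steps=50):
    x = np.zeros(2)
    for _ in range(steps):
        grad = A @ x - b
        x -= lr * (precond @ grad)  # preconditioned gradient step
    return x

identity = np.eye(2)
jacobi = np.diag(1.0 / np.diag(A))  # inverse of A's diagonal

# Unpreconditioned, the step size is capped by the largest eigenvalue
# (lr < 2/100), so the flat direction converges very slowly.
print(gd(identity, lr=0.018))       # roughly [0.60, 0.01]
print(gd(jacobi, lr=0.9))           # essentially [1.00, 0.01]
```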

Even if Carlos exists, it will take him time to do both parts well, and time usually means money, and so my claim is that even if he knows bar, he's better off putting his time/energy/money into foo. @Anon2: I don't think that was the main point I was trying to make.

hal/anon1 - it will actually take less time (and mental anguish) for Carlos to put in a call to a shortest-path algorithm from a library than for Akiko to hack up some greedy shortest-path algorithm. Learning is about leverage - learn once, use many times.

With basic algorithms the leverage is huge, since you encounter the same models/algorithms your whole life. hal/anon2 - if you go back and re-read your post, you will see that your whole post is written from the point of view of what the students will need in real life.
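
For what it's worth, the "call a library" route looks something like the sketch below (networkx is just one convenient choice for illustration; the thread doesn't name a particular library):

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 2.0),
    ("b", "c", 1.0),
    ("a", "c", 5.0),
])

# Dijkstra under the hood; no hand-rolled greedy search needed.
path = nx.shortest_path(G, source="a", target="c", weight="weight")
cost = nx.shortest_path_length(G, source="a", target="c", weight="weight")
print(path, cost)  # ['a', 'b', 'c'] 3.0
```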

How statistics lost their power, and why we should fear what comes next - podcast

In the UK, a research project by Cambridge University and YouGov looking at conspiracy theories discovered that 55% of the population believes that the government “is hiding the truth about the number of immigrants living here”.

Either the state continues to make claims that it believes to be valid and is accused by sceptics of propaganda, or else politicians and officials are confined to saying what feels plausible and intuitively true, but may ultimately be inaccurate.

The declining authority of statistics – and the experts who analyse them – is at the heart of the crisis that has become known as “post-truth” politics.

In the second half of the 17th century, in the aftermath of prolonged and bloody conflicts, European rulers adopted an entirely new perspective on the task of government, focused upon demographic trends – an approach made possible by the birth of modern statistics.

Since ancient times, censuses had been used to track population size, but these were costly and laborious to carry out and focused on citizens who were considered politically important (property-owning men), rather than society as a whole.

The emergence in the late 17th century of government advisers claiming scientific authority, rather than political or military acumen, represents the origins of the “expert” culture now so reviled by populists.

There was initially only one client for this type of expertise, and the clue is in the word “statistics”.

Casting an eye over national populations, states became focused upon a range of quantities: births, deaths, baptisms, marriages, harvests, imports, exports, price fluctuations.

New techniques were developed to represent these indicators, which exploited both the vertical and horizontal dimensions of the page, laying out data in matrices and tables, just as merchants had done with the development of standardised book-keeping techniques in the late 15th century.

By simplifying diverse populations down to specific indicators, and displaying them in suitable tables, governments could circumvent the need to acquire broader detailed local and historical insight.

The fact that GDP only captures the value of paid work, thereby excluding the work traditionally done by women in the domestic sphere, has made it a target of feminist critique since the 1960s.

(This has the side-effect of making systemic racism in the labour market much harder to quantify.) Despite these criticisms, the aspiration to depict a society in its entirety, and to do so in an objective fashion, has meant that various progressive ideals have been attached to statistics.

The other part is about how powerful political ideals became invested in these techniques: ideals of “evidence-based policy”, rationality, progress and nationhood grounded in facts, rather than in romanticised stories.

Since the high-point of the Enlightenment in the late 18th century, liberals and republicans have invested great hope that national measurement frameworks could produce a more rational politics, organised around demonstrable improvements in social and economic life.

Uniformity of data collection, overseen by a centralised cadre of highly educated experts, was an integral part of the ideal of a centrally governed republic, which sought to establish a unified, egalitarian society.

From the Enlightenment onwards, statistics played an increasingly important role in the public sphere, informing debate in the media, providing social movements with evidence they could use.

GDP is an estimate of the sum total of a nation’s consumer spending, government spending, investments and trade balance (exports minus imports), which is represented in a single number.
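
In standard national-accounts notation (the symbols here are the conventional ones, not the article's), this description corresponds to the expenditure identity $\mathrm{GDP} = C + I + G + (X - M)$, where $C$ is consumer spending, $I$ investment, $G$ government spending, $X$ exports and $M$ imports.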

This is fiendishly difficult to get right, and efforts to calculate this figure began, like so many mathematical techniques, as a matter of marginal, somewhat nerdish interest during the 1930s.

It was only elevated to a matter of national political urgency by the second world war, when governments needed to know whether the national population was producing enough to keep up the war effort.

In the decades that followed, this single indicator, though never without its critics, took on a hallowed political status, as the ultimate barometer of a government’s competence.

This new industry immediately became the object of public and political fascination, as the media reported on what this new science told us about what “women” or “Americans” or “manual labourers” thought about the world.

As indicators of health, prosperity, equality, opinion and quality of life have come to tell us who we are collectively and whether things are getting better or worse, politicians have leaned heavily on statistics to buttress their authority.

In talking of society as a whole, in seeking to govern the economy as a whole, both politicians and technocrats are believed to have “lost touch” with how it feels to be a single citizen in particular.

Speaking scientifically about the nation – for instance in terms of macroeconomics – is an insult to those who would prefer to rely on memory and narrative for their sense of nationhood, and are sick of being told that their “imagined community” does not exist.

For roughly 350 years, the great achievement of statisticians has been to reduce the complexity and fluidity of national populations into manageable, comprehensible facts and figures.

Yet in recent decades, the world has changed dramatically, thanks to the cultural politics that emerged in the 1960s and the reshaping of the global economy that began soon after.

In many cases it has made the location of economic activity far more important, exacerbating the inequality between successful locations (such as London or San Francisco) and less successful locations (such as north-east England or the US rust belt).

If you live in one of the towns in the Welsh valleys that was once dependent on steel manufacturing or mining for jobs, politicians talking of how “the economy” is “doing well” are likely to breed additional resentment.

So when politicians use national indicators to make their case, they implicitly assume some spirit of patriotic mutual sacrifice on the part of voters: you might be the loser on this occasion, but next time you might be the beneficiary.

The ECB is concerned with the inflation or unemployment rate across the eurozone as if it were a single homogeneous territory, at the same time as the economic fate of European citizens is splintering in different directions, depending on which region, city or neighbourhood they happen to live in.

But so long as politicians continue to deflect criticism by pointing to the unemployment rate, the experiences of those struggling to get enough work or to live on their wages go unrepresented in public debate.

It wouldn’t be all that surprising if these same people became suspicious of policy experts and the use of statistics in political debate, given the mismatch between what politicians say about the labour market and the lived reality.

In recent years, a new way of quantifying and visualising populations has emerged that potentially pushes statistics to the margins, ushering in a different era altogether.

This is a form of aggregation suitable to a more fluid political age, in which not everything can be reliably referred back to some Enlightenment ideal of the nation state as guardian of the public interest.

We live in an age in which our feelings, identities and affiliations can be tracked and analysed with unprecedented speed and sensitivity – but there is nothing that anchors this new capacity in the public interest or public debate.

In 2014, when Facebook researchers published results of a study of “emotional contagion” that they had carried out on their users – in which they altered news feeds to see how it affected the content that users then shared in response – there was an outcry that people were being unwittingly experimented on.

Such politicians rely on a new, less visible elite, who seek out patterns from vast data banks, but rarely make any public pronouncements, let alone publish any evidence.

During the presidential election campaign, Cambridge Analytica drew on various data sources to develop psychological profiles of millions of Americans, which it then used to help Trump target voters with tailored messaging.

As techniques of “sentiment analysis”, which detect the mood of large numbers of people by tracking indicators such as word usage on social media, become incorporated into political campaigns, the emotional allure of figures such as Trump will become amenable to scientific scrutiny.
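
As a purely illustrative toy (the word lists and scoring below are invented; real campaign tooling is far more elaborate), "tracking word usage" can be as simple as counting lexicon hits:

```python
import re

# Invented, minimal word lists; real sentiment lexicons contain thousands of entries.
POSITIVE = {"great", "win", "strong", "hope"}
NEGATIVE = {"crooked", "disaster", "weak", "fear"}

def sentiment_score(post: str) -> int:
    # Positive-lexicon hits minus negative-lexicon hits.
    words = re.findall(r"[a-z']+", post.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = ["A great win for the country", "What a disaster, a total disaster"]
print([sentiment_score(p) for p in posts])  # [2, -2]
```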

Where statistics can be used to correct faulty claims about the economy or society or population, in an age of data analytics there are few mechanisms to prevent people from giving way to their instinctive reactions or emotional prejudices.

In this new technical and political climate, it will fall to the new digital elite to identify the facts, projections and truth amid the rushing stream of data that results.

Whether indicators such as GDP and unemployment continue to carry political clout remains to be seen, but if they don’t, it won’t necessarily herald the end of experts, less still the end of truth.

Just as “sharing economy” platforms such as Uber and Airbnb have recently been thwarted by legal rulings (Uber being compelled to recognise drivers as employees, Airbnb being banned altogether by some municipal authorities), privacy and human rights law represents a potential obstacle to the extension of data analytics.

How Do We Align Artificial Intelligence with Human Values?

As I began considering contributing, I found myself contemplating whether or not I, a business professional with what I would consider only a cursory (though increasing) understanding of artificial intelligence, would be able to provide a meaningful or useful contribution to this dialogue.

It was quite vexing to discover that instead of having answers to the questions that were asked, I instead ended up raising more and more questions, many of which cannot be answered without further definition of, discussion of, and/or real life examples of the application of, the AI Principles.

For these reasons, I’d like to make a suggestion that could achieve not only the stated goal of receiving public comment on the AI Principles themselves, but could also serve to educate the general populace about AI, increase the number of individuals who are willing to engage in this dialogue, and potentially help define the implementation of the Asilomar AI Principles.

inviting public comment on the companion document, with the intention of creating an end product that would include the following (with notations for sections that would only be included during the ‘creation/public comment’ stage of this document’s development, which would help to shape the final product):

a result, management at Black Mesa instruct one of their Computer Engineers to create an AI program that will allow the company to access every computer located within any Black Mesa-owned facility, no matter how remote, scan them for suspicious activity, and search the contents for ‘key words/phrases’ that have been identified to indicate illegal &/or subversive activity by a current employee/insider threat.

prior to the completion of the AI program, the Computer Engineer accidentally discovers that another engineer will be modifying the AI program so that it can search any computer, owned by any facility or contained on any network, as Black Mesa intends to sell the AI program to a tyrannical island dictatorship, which intends to utilize the program to conduct searches of citizens’ computers for the express purpose of “identifying subversives”.

This AI system is required to have the ability to, among other things, ascertain when the occupants of the compound are in imminent danger of harm by an outside force but are incapacitated (such as following an initial chemical attack that renders the occupants unconscious), to assume command and control of the compound’s array of security and defensive systems, &

accounts payable intern who, during a live, on-site usage test of the not-yet-armed AI system, is erroneously identified by the AI system as an ‘enemy combatant which must be neutralized’, followed by the displaying of code indicating the AI system’s chosen actions would have been to activate the facility security system that is designed to deliver a low voltage electric ‘warning shock’ to would-be intruders when they touched any part of the all-metal entry door –

AI system development team has not been able to identify why the intern was identified at the elevated level of an ‘enemy combatant’, but they are under extreme pressure from Aperture’s leadership to provide a prototype of the AI system to the agency that funded the project, with the agency intending to immediately begin live testing of the prototype.

(For example, industry-wide, governmental, through treaties/international agreements, etc.) This draft companion document would serve a number of purposes, and facilitate the achievement of a number of implied and/or stated goals of the Future of Life Institute, as well as of the AI professional community as a whole, including:

a working, referenceable document for professionals entering the field &/or working in the field of AI throughout the globe (including professionals who may be unable to attend conferences, or even communicate with the larger community of AI professionals)

And while I am now cognizant of the fact that, technically speaking, Dragon is categorized as a “non-sentient artificial intelligence that is focused on one narrow task” (utilization of computers completely by voice), there have been many, many times when the program has behaved in ways that have caused both myself and those around me to sincerely question whether or not the Dragon program has somehow attained sentience –

Priests - JJ [OFFICIAL VIDEO]

From the album "Nothing Feels Natural", out January 27, 2017 on Sister Polygon Records. sisterpolygonrecords.bigcartel.com Directed + edited by Katie Alice ...

Modeling the Melt: What Math Tells Us About the Disappearing Polar Ice Caps

Kenneth M. Golden is a Distinguished Professor of Mathematics and Adjunct Professor of Bioengineering at the University of Utah. His scientific interests lie in ...

ZEITGEIST: MOVING FORWARD | OFFICIAL RELEASE | 2011

Please support Peter Joseph's new, upcoming film project: "InterReflections" by joining the mailing list and helping: LIKE ..

CS50 2016 Week 0 at Yale (pre-release)

twitch.tv/stereotonetim 2017-06-16 [23:39]