
Is Artificial Intelligence Reading Your License Plate?

What if police officers had the power to instantly scan and process license plate information from every car on the road -- including yours?

Axon recently announced its Fleet 3 in-car video system, which includes a technology called automated license plate recognition.

Police cars equipped with artificial intelligence cameras will automatically read numerous license plates and flag the ones that need attention.
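At its core, the flagging step of such a system amounts to matching recognized plate text against a watchlist. The following is a simplified sketch in Python, not Axon's implementation; the plate strings and the normalize/check_frame helpers are hypothetical, and the camera/OCR stage is assumed rather than shown.

# Hypothetical watchlist and helpers -- a simplified sketch, not Axon's system.
# recognize_plate (the OCR/vision stage) is assumed to have already produced
# a plate string from a camera frame.

HOTLIST = {"ABC1234", "XYZ9876"}  # example plates flagged for attention

def normalize(plate: str) -> str:
    """Strip spaces/dashes and uppercase so 'abc-1234' matches 'ABC1234'."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

def check_frame(plate_text: str) -> bool:
    """Return True if a recognized plate is on the hotlist."""
    return normalize(plate_text) in HOTLIST

# Example: only a flagged plate triggers an alert.
for seen in ["abc-1234", "DEF 5555"]:
    if check_frame(seen):
        print(f"ALERT: {seen} is on the hotlist")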

This announcement comes a full year before the Fleet 3 system is rolled out, allowing time to address potential privacy and ethical concerns.

Axon's current Fleet 2 in-car video product can already make license plates visible at a distance of 30 feet.

Earlier this year, Axon elected to keep facial recognition technology off its police body cameras for fear that it would disproportionately target minorities.

Of Axon's identified $8.4 billion total addressable market, the lion's share ($5.8 billion) comes from cloud-based software solutions.

Returning to license plates: AI plate-recognition technology is a way to entice law enforcement agencies to purchase the Fleet 3 system.

And since Fleet 3 in-car video uploads data automatically to the cloud, it drives sales growth in Axon's software and sensors segment of the business.

I think most of us are for the safer communities that more effective law enforcement brings, but against invading the privacy of law-abiding citizens and discriminating against people based on race or social status.

Predicting what molecules to make next and how to make them

Our scientists are using AI to help redefine medical science in the quest for new and better ways to discover, test and accelerate the potential medicines of tomorrow.

The following sections tell just some of the stories behind how data science and AI are starting to make a difference to our R&D efforts.

AI is making literary leaps – now we need the rules to catch up

Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that the model now “generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training”.

Citing concerns about malicious applications of the technology, OpenAI declined to release the fully trained model: “As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.” Given that OpenAI describes itself as a research institute dedicated to “discovering and enacting the path to safe artificial general intelligence”, this cautious approach to releasing a potentially powerful and disruptive tool into the wild seemed appropriate.

After all, without full disclosure – of program code, training dataset, neural network weights, etc – how could independent researchers decide whether the claims made by OpenAI about its system were valid?

The replicability of experiments is a cornerstone of scientific method, so the fact that some academic fields may be experiencing a “replication crisis” (a large number of studies that prove difficult or impossible to reproduce) is worrying.

On the other hand, the world is now suffering the consequences of tech companies like Facebook, Google, Twitter, LinkedIn, Uber and co designing algorithms for increasing “user engagement” and releasing them on an unsuspecting world with apparently no thought of their unintended consequences.

If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release).

In a fascinating essay, “I, Language Robot”, the neuroscientist and writer Patrick House reports on his experience of working alongside OpenAI’s language model, which produces style-matched prose in response to any written prompt it is fed.
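Readers can get a feel for this prompt-and-continuation loop themselves, since the smaller GPT-2 model OpenAI released is publicly available. Below is a minimal sketch using the open-source Hugging Face transformers library; the tooling choice here is ours, not OpenAI's or House's, and the prompt is just an example.

# Minimal sketch: generate a style-matched continuation with the released
# small GPT-2 checkpoint. Library choice (Hugging Face transformers) is an
# assumption; the articles quoted above name no specific tooling.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

# Feed any written prompt; the model continues it in a matching style.
prompt = "The rain had stopped by the time the detective reached the house,"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])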

Artificial Intelligence (AI) Health Outcomes Challenge

Launch Stage: On October 30, 2019, CMS announced the 25 Participants advancing to Stage 1.

The 25 Participants, the titles of their proposed solutions, and their geographic locations include the following:

Participant: Accenture Federal Services
Proposed Solution: Accenture Federal Services AI Challenge
Geographic Location: Arlington, Virginia

Participant: Ann Arbor Algorithms Inc.
Proposed Solution: Actionable AI to Prevent Unplanned Admissions and Adverse Events
Geographic Location: Kenilworth, New Jersey

Participant: North Carolina State University (NCSU)
Proposed Solution: Multi-Layered Feature Selection and Dynamic Personalized Scoring
Geographic Location: Raleigh, North Carolina

Participant: Northrop Grumman Systems Corporation (NGSC)
Proposed Solution: Reducing Patient Risk through Actionable Artificial Intelligence: AI Risk Avoidance System (ARAS)
Geographic Location: Herndon, Virginia

Participant: Northwestern Medicine
Proposed Solution: A human-machine solution to enhance delivery of relationship-oriented care
Geographic Location: Chicago, Illinois

Participant: Observational Health Data Sciences and Informatics (OHDSI)
Proposed Solution: OHDSI Submission
Geographic Location: New York, New York

Participant: University of Virginia Health System
Proposed Solution: Actionable AI
Geographic Location: Charlottesville, Virginia

More information about Stage 1 submission requirements and evaluation criteria will be provided at a later date.