AI News: Regulatory governance and independence

Australia's digital minister avoids legislating AI ethics; framework to remain voluntary

Australia's Minister for Superannuation, Financial Services and the Digital Economy Jane Hume has assured that the country's AI ethics framework will remain voluntary for the foreseeable future.

'The problems themselves haven't really changed, so our regulations certainly have to be flexible enough to accommodate technology changes … we want to make sure that there's nothing in regulations and legislation that prevents the advancement of technology,' Hume said.

Microsoft, for its part, voluntarily adopted the AI ethics framework by developing an internal governance structure 'to enable progress and accountability rules to standardise responsible AI requirements, training and practices to help our employees act on our principles, and to think deeply about the impacts of AI systems', Microsoft Australia corporate affairs director Belinda Dennett said during the event.

Earlier this month, CBA revealed that testing the ethics principles during the creation and design of its Commbank app feature Bill Sense gave insight into how the bank could apply responsible AI at scale.

'Things like the safe management of data, customer privacy, and transparency have been central to the way we operate since long before the advent of AI,' CBA chief decision scientist Dan Jermyn told ZDNet.

'But the pace, scale, and sophistication of AI solutions mean we need to ensure we are constantly evolving to meet the demands of new technology, which is why collaboration with our partners across government and industry is so important.'

In a bid to ensure AI is applied responsibly, CBA has developed a tool that makes it easier for teams across the bank to deliver AI safely at scale, according to Jermyn.

The EU's Artificial Intelligence Act Could Become A Brake On Innovation

The EU proposal to regulate AI will be a brake on innovation and a challenge not to be underestimated for promising start-ups that are using artificial intelligence. According to a report by the Washington-based think tank Center for Data Innovation, the proposal sets forth new rules on the use of artificial intelligence in the EU. Realising AI projects will become significantly more difficult under the new law, and ambitious start-ups will almost certainly be left to develop their business further outside the EU.

The regulatory framework proposed in the White Paper is based on the idea that the development and use of artificial intelligence entails high risks for fundamental rights, consumer rights, and safety.

Key features include requirements on training data, data and record keeping, the provision of information, accuracy and robustness of the technology, and human oversight, as well as specific requirements for certain AI applications such as remote biometric identification.

European officials also want to restrict police use of facial recognition and ban certain types of AI systems outright, as part of a broader effort to regulate high-risk applications of artificial intelligence.

The regulatory and policy developments in the first quarter of 2021 reflect a global turning point for serious regulation of artificial intelligence.

Meanwhile, domestic AI policy is continuing to take shape in the United States, but it is largely focused on ensuring international competitiveness and strengthening national security capabilities.

According to a draft of future EU rules obtained by Politico, the EU will ban certain applications of high-risk artificial intelligence systems and will prohibit others from entering the bloc.

For example, US states such as New York require autonomous vehicle manufacturers to conduct road tests under paid police supervision, which makes testing such vehicles expensive.

Respondents attach great importance to the EU's role in shaping a coherent strategic vision for technology policy, with 70% describing it as "very important".

In these areas, the role of member states has been rated worse than that of the EU, showing recognition of the desire and need for multi-level coordination between the EU and individual member states, as well as the role of each of them.

Privacy regulations should have changed the Internet for the better, but so far they have mostly frustrated users, businesses, and regulators. So it stands to reason that we are well advised to prepare ourselves for an AI act full of challenges.