Global Firms Commit to Ethical AI

The Seoul Declaration called for promoting safe, innovative and inclusive AI to address challenges and opportunities associated with the fast-evolving technology…reports Asian Lite News

A group of South Korean and global tech companies adopted a joint pledge on Wednesday, vowing to responsibly develop and use artificial intelligence (AI) and to address social challenges with the technology.

The ‘Seoul AI Business Pledge’ was announced by 14 companies, including Korea’s Samsung Electronics, Naver, Kakao and KT Corp, as well as global big tech companies, such as Google, OpenAI and IBM, during the opening ceremony of the AI Global Forum in Seoul.

The AI Global Forum is part of the two-day AI Seoul Summit co-hosted by South Korea and Britain as a follow-up to last year’s inaugural global AI safety summit, where the first global guidelines on AI safety were adopted, reports Yonhap news agency.

“We commit to upholding the three strategic priorities, through our efforts including advancing AI safety research, identifying best practices, collaborating across sectors, and helping AI meet society’s greatest challenges,” the pledge read.

The three priorities are: ensuring responsible development and use of AI, pursuing sustainable development and innovation in AI, and ensuring the equitable benefits of AI for all.

In the pledge, the companies acknowledged the rapid acceleration of technological advancements in AI and their growing impact on the global community. They vowed to work toward responsible AI development in line with the Seoul Declaration adopted at the AI Seoul Summit the previous day.

The Seoul Declaration called for promoting safe, innovative and inclusive AI to address challenges and opportunities associated with the fast-evolving technology.

The first day of the AI Global Forum saw representatives from 19 countries, including the United States, Japan, Germany, France and Italy, attending the ministers’ session to discuss actions to strengthen AI safety.

Meanwhile, EU ministers on Tuesday unanimously gave their final approval to the Artificial Intelligence Act, a major new law regulating the use of the transformative technology in “high-risk” situations, such as law enforcement and employment.

The European Union hopes that laying down strict AI rules relatively early in the technology’s development will address the dangers in time and help shape the international agenda for regulating AI.

Systems intended for use in “high-risk” situations, listed in the law’s annexes, will have to meet various standards spanning transparency, accuracy, cybersecurity and quality of training data, among other things. Some uses, such as Chinese-style social credit scoring, will be banned outright.

High-risk systems will have to obtain certification from approved bodies before they can be put on the EU market. A new “AI Office” will oversee enforcement at the EU level.

There are also more basic rules for “general purpose” systems that may be used in various situations – some high-risk, others not. For example, providers of such systems will have to keep certain technical documents for audit.

However, providers of especially powerful general-purpose AI systems will have to notify the European Commission if the system possesses certain technical capabilities.

Unless the provider can prove that their system poses no serious risk, the Commission could designate it as a “general-purpose AI model with systemic risk,” after which stricter risk-mitigation rules would apply.

Meanwhile, AI-generated content such as images, sound or text would have to be marked to protect against misleading deepfakes.

The European Commission proposed the first draft of the AI Act in April 2021, having published a “white paper” outlining its plan for a risk-based approach in February 2020.

Industry officials, including Jason Kwon, chief strategy officer of OpenAI, and Tom Lue, vice president of Google DeepMind, joined roundtable discussions.

GenAI Tops Q1 Business Agendas

Generative artificial intelligence (GenAI) has emerged as a key theme in companies’ discussions in the first quarter (Q1) of this year, a new report said on Monday.

According to the data analytics and consulting company GlobalData, S&P 500 companies discussed GenAI adoption, the products and solutions on offer, strategic partnerships, investments, and application areas in their Q1 2024 earnings call transcripts.

“Companies are looking at GenAI tools for better productivity, increased sales, brand awareness, and an enhanced customer experience. They are investing, collaborating, and leveraging to make use of this new and emerging opportunity,” said Misa Singh, Business Fundamentals Analyst at GlobalData.

According to the report, companies are applying GenAI to help customers and increase productivity.

Biotech firm Thermo Fisher Scientific is leveraging GenAI as part of its PPI (Practical Process Improvement) business system toolkit to help customers. Management services company Automatic Data Processing, meanwhile, is using GenAI to proactively deliver actionable insights in plain language to enhance HR productivity, aid decision-making, and streamline day-to-day tasks for clients and their employees, the report mentioned.

Moreover, the report showed that companies are also teaming up to improve their AI capabilities.

Cognizant Technology Solutions is collaborating with ServiceNow to enhance its Work NEXT modern workplace services solution with GenAI capabilities.

Cognizant also discussed its plan to invest approximately $1 billion in GenAI capabilities over the next three years.
