Software companies are pushing for stricter regulation

Washington. The world’s leading software companies are calling for stronger legal regulation of artificial intelligence (AI). This is the only way to effectively eliminate legal uncertainties for companies, Kate Goodloe, executive director of lobby group BSA The Software Alliance, said in an interview with international journalists in Washington.

The association represents the interests of B2B software providers, that is, companies whose software is aimed not primarily at end users but at other businesses. BSA members include Microsoft, one of the largest investors in ChatGPT developer OpenAI, and Adobe, which is also increasingly building AI into products such as Photoshop.

“Artificial Intelligence is now being used in all sectors of the economy.”

“AI is now being used in every sector of the economy. For example, our member companies are developing AI solutions used by car manufacturers or healthcare providers. But there are legitimate concerns about how these technologies are used.”

“The use of artificial intelligence can make selecting job candidates or making credit decisions more fair and efficient,” said Aaron Cooper, BSA’s vice president of global policy. “But we recognize that there are also risks associated with this. If the system is not trained with data that is representative of the community, it could perpetuate past discrimination.”

The European Parliament has passed the “Artificial Intelligence Act”

“When an AI system makes life-changing decisions, it is always a high-risk application,” Kate Goodloe added. “In such cases, companies should be required to use risk management and conduct impact assessments.”

In June, the EU Parliament passed the European Artificial Intelligence Act (AI Act), which is currently being negotiated in trilogue with the EU Commission and the member states. An agreement is expected by the end of this year. The AI Act takes a risk-based approach, with different levels of regulation for different levels of risk, including a complete ban on AI applications that pose unacceptably high risks.

The BSA sees the lack of regulation as a trade barrier

“We think this is the right approach,” Aaron Cooper said. What matters most, however, is that AI is regulated as consistently as possible internationally. So far things look promising: the US, Singapore and Japan are moving in a similar direction. However, the United States in particular is lagging behind in terms of legislation, though not in the development of new AI technologies. Specific AI regulation at the federal level is not on the horizon in the near future.

The BSA sees the lack of such laws as a business disadvantage for its member companies. “Companies relying on AI solutions need to ensure that these products are safe, reliable and will not cause them regulatory problems. That is why strong laws are needed that give companies a framework for developing these AI solutions.”