Can Tech Companies Be Trusted With AI Governance?

This is a syndicated repost published with the permission of Statista | Infographics. To view original, click here. Opinions herein are not those of the Wall Street Examiner or Lee Adler. Reposting does not imply endorsement. The information presented is for educational or entertainment purposes and is not individual investment advice.

Public-facing AI tools, including text-based applications like ChatGPT and text-to-image models like Stable Diffusion, Midjourney and DALL-E 2, have quickly become the newest digital frontier for regulatory, legal and online privacy issues. Malicious actors are already committing criminal offenses and spreading mis- and disinformation aided by the capabilities of generative AI, while national governments struggle to keep pace and companies shift the blame to individual users. As a survey conducted by KPMG Australia and the University of Queensland shows, the general public already distrusts government institutions to oversee the implementation of AI.

Surveying over 17,000 people across 17 countries, the study found that only one third of respondents had high or complete confidence in governments to regulate and govern AI tools and systems. Participants were similarly skeptical of tech companies and existing regulatory agencies as AI governing bodies. Instead, research institutions, universities and defense forces were seen as the most capable in this regard.

Although the people surveyed were skeptical of national governments, supranational bodies like the United Nations were viewed more positively. The European Commission is currently the only body in this category to have drafted a law aimed at curbing the influence of AI and ensuring the protection of individuals' rights. The so-called AI Act was proposed in April 2021 and has yet to be adopted. The proposed bill sorts AI applications into different risk categories: AI aimed at manipulating public opinion or profiting off children or vulnerable groups would become illegal in the EU, while high-risk applications, such as biometric data software, would be subject to strict legal boundaries. Experts have criticized the draft for its apparent loopholes and vague definitions.

In the U.S., President Joe Biden unveiled a blueprint called the AI Bill of Rights in October 2022. The document outlines five guiding principles for the development and implementation of AI but, despite its name, is non-regulatory and non-binding. At the time of writing, there are no federal laws or binding policies specific to AI regulation in the United States.

Chart: Confidence in institutions to regulate or govern artificial intelligence (share of respondents most confident in each institution).
