Combatting the Risks of Generative AI

by Anna Munhin | Mar 23, 2023 | News

There is a competitive race underway. The VP of machine learning at Hyperscience argues that it is important to develop and evaluate these powerful tools within a clear ethical framework that lays out rules and regulations.

Bard made an incredibly costly mistake in its first public demo: when it responded to a user prompt with an incorrect claim, Google's parent company Alphabet lost roughly $100 billion in market value. Since Bard's hiccup, Meta has announced its competing model, LLaMA, and retailers have shared plans to incorporate generative AI into their platforms.

Artificial intelligence solutions are here to stay and will reshape the consumer experience. As companies race to integrate them just as quickly, ethical considerations must be at the forefront.

The Changing Role of Search Engines

Search engines have traditionally been classified as information aggregators because they gather content from other parties. Their role will change as large language models are used to deliver search results, since these models generate (and sometimes hallucinate) content in response to user prompts.

By becoming information generators, Microsoft and Google take on greater responsibility for ethical concerns. Under Section 230 of the Communications Decency Act, search engines generally cannot be sued for libel over third-party content; it is the content creator who can be held liable. As the creator of its own responses, Bard could eventually be held accountable for a potentially libelous answer.

Understanding the implications of ethically compromised content will be an important area to watch. When a peer-to-peer mental health service used artificial intelligence to generate responses, the social media backlash was swift. Deploying generative tools in healthcare is especially risky, and companies in the space must consider the public response before taking that step.

Those that regulate themselves internally will be better off than those that don't.

How to Implement an Ethical Framework

Strong ethical frameworks require internal education and buy-in. Every employee needs to be aware of the pitfalls of machine learning technology. Organizations should create an artificial intelligence ethics committee focused on education and engagement. Such a committee provides a system of checks and balances for technological development and helps the organization align on how regulators can protect the public.

There are a number of areas to consider in order to create a successful ethics team.

  1. Put transparency first: To start, lay out clear goals and objectives for your committee. Stakeholders should align on the end goals, and these conversations should not be limited solely to the committee itself. Employees and other technical leaders across the organization may have something to say about the committee’s direction, and listening to all voices is important. With every decision and milestone, transparent communication will accelerate trust and buy-in from the organization; its importance cannot be overstated.
  2. Avoid over-committing: Artificial intelligence is an incredibly complex field with many intricacies yet to be explored. That’s why narrowing your committee’s scope and remaining focused is critical. If you try to tackle everything under the sun, things will inevitably fall flat. Understand how your company plans to deploy, build or leverage technology, and use this knowledge to be intentional in your committee’s plans to drive the most impact.
  3. Embrace diverse perspectives: Those experienced with AI and deep tech offer the most technical expertise, but a well-rounded committee embraces perspectives and stakeholders from across the entire business. Team members from legal, creative, marketing, and engineering, to name a few, should all be present, giving your committee representation in all areas where concerns may arise. Once the committee is underway, engage in company-wide conversations to bring everyone into the fold.

The most influential ethics committees will also engage with people and teams outside the organization to keep up with industry conversations, challenges, and solutions. They could work with regulators to create rules that protect individuals from the negative effects of artificial intelligence, and they should welcome customer feedback to understand the ethical questions customers' teams face.


Launching Ethical Principles into Practice

The White House has proposed a blueprint to help developers and organizations better navigate ethical artificial intelligence. The plan gives insight into potential future federal-level regulation, the looming requirements of public sector customers, and, perhaps most importantly, a common language to spark internal discussion.

The Bill of Rights for Artificial Intelligence is a step in the right direction for establishing ethical uses of the technology, and it will increase everyday people's awareness of its potential negative impacts.

While we navigate the early stages of regulation, it is important for companies to take preventative steps to avoid unethical applications of artificial intelligence. The primary areas outlined by the Bill of Rights can serve as a 'sniff test' to determine whether emerging generative artificial intelligence use cases are ethical.

The key areas to evaluate are outlined below.

  1. Safe and effective systems: You should be protected from unsafe or ineffective systems. 
  2. Algorithmic discrimination protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  3. Data privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
  4. Notice and explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. 
  5. Human alternatives, consideration, and fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
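The 'sniff test' over these five areas can be made concrete as a simple checklist. The sketch below is an illustrative assumption, not an official evaluation tool: the `sniff_test` helper and its answer format are hypothetical, though the area names come directly from the list above.

```python
# Hypothetical "sniff test": flag which Bill of Rights areas a proposed
# generative AI use case fails to address. Any area not explicitly
# marked True counts as a gap.

BILL_OF_RIGHTS_AREAS = [
    "Safe and effective systems",
    "Algorithmic discrimination protections",
    "Data privacy",
    "Notice and explanation",
    "Human alternatives, consideration, and fallback",
]

def sniff_test(answers: dict) -> list:
    """Return the areas the use case does not yet address.

    `answers` maps an area name to True (addressed) or False.
    Missing areas are treated as unaddressed.
    """
    return [area for area in BILL_OF_RIGHTS_AREAS
            if not answers.get(area, False)]

# Example: a chatbot pilot with no planned human fallback path.
gaps = sniff_test({
    "Safe and effective systems": True,
    "Algorithmic discrimination protections": True,
    "Data privacy": True,
    "Notice and explanation": True,
    "Human alternatives, consideration, and fallback": False,
})
print(gaps)  # ['Human alternatives, consideration, and fallback']
```

A use case that returns any gaps would warrant review by the ethics committee before launch.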

Artificial intelligence regulation requires a delicate balance of continuously fostering advancement while carefully considering the consequences of widespread use. If artificial intelligence is designed, implemented, and applied responsibly, its potential could be limitless.

As these tools become more popular, the risk level for companies that use them rises. Companies that lead with an ethical framework are better positioned to manage ethical concerns, while those that rush their offerings to market risk eroding consumer confidence for years.

In the next two years, will there be a resolution to the ethical debate about generative artificial intelligence? Share your thoughts with us on Facebook, Twitter, and LinkedIn. We would be happy to hear from you.