June 26, 2023 | By Holly Vatter | 3 min read

Imagine the possibilities of providing text-based queries and opening a world of knowledge for improved learning and productivity. The possibilities keep growing: assisting in writing articles, essays or emails; accessing summarized research; generating and brainstorming ideas; dynamic search with personalized recommendations for retail and travel; and explaining complicated topics for education and training. With generative AI, search becomes dramatically different. Instead of providing links to multiple articles, the user receives direct answers synthesized from myriad data sources. It’s like having a conversation with a very smart machine.

What is generative AI?

Generative AI uses advanced machine learning algorithms that take user prompts and apply natural language processing (NLP) to generate answers to almost any question asked. It draws on vast amounts of internet data, large-scale pre-training and reinforcement learning to enable surprisingly human-like interactions. Reinforcement learning from human feedback (RLHF) helps the model adapt to different contexts and situations, becoming more accurate and natural over time. Generative AI is being explored for a variety of use cases including marketing, customer service, retail and education.
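
To make the prompt-and-response pattern concrete, here is a minimal sketch using the open-source Hugging Face transformers library. The library, the small gpt2 checkpoint and the parameters are illustrative assumptions chosen only to show the basic flow; they are not the models or tooling discussed later in this article.

```python
# Minimal illustration of prompt-in, generated-text-out.
# Assumes the open-source "transformers" library and the small "gpt2"
# checkpoint, used here only as an example of a pre-trained generative model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Explain generative AI in one sentence:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The pipeline returns a list of dicts containing the prompt plus generated text.
print(result[0]["generated_text"])
```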

ChatGPT was the first but today there are many competitors

ChatGPT uses a deep learning architecture called the Transformer and represents a significant advancement in the field of NLP. While OpenAI has taken the lead, the competition is growing. According to Precedence Research, the global generative AI market was valued at USD 10.79 billion in 2022 and is expected to reach around USD 118.06 billion by 2032, a 27.02% CAGR between 2023 and 2032. This is all very impressive, but not without caveats.
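
As a quick sanity check on those figures, the compound annual growth rate implied by the two endpoints can be computed directly. The values below are simply the cited Precedence Research numbers; the snippet only shows the arithmetic.

```python
# Sanity check of the cited market figures: the implied compound annual
# growth rate (CAGR) over the 10 years from 2022 to 2032.
start_value = 10.79    # USD billion, 2022 (cited estimate)
end_value = 118.06     # USD billion, 2032 (cited projection)
years = 10

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # roughly 27%, consistent with the cited 27.02%
```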

Generative AI and risky business

There are some fundamental issues when using off-the-shelf, pre-built generative models. Each organization must balance opportunities for value creation with the risks involved. Depending on the business and the use case, organizations with a low tolerance for risk will find that either building in-house or working with a trusted partner yields better results.

Concerns to consider with off-the-shelf generative AI models include:

Internet data is not always fair and accurate

At the heart of much of generative AI today are vast amounts of data from sources such as Wikipedia, websites, articles, and image or audio files. Generative models match patterns in the underlying data to create content, and without controls that data can carry malicious intent that advances disinformation, bias and online harassment. Because this technology is so new, there is sometimes a lack of accountability and increased exposure to reputational and regulatory risk pertaining to things like copyrights and royalties.

There can be a disconnect between model developers and all model use cases

Developers of generative models may not see the full extent of how the model will be used and adapted downstream for other purposes. This can result in faulty assumptions and outcomes. Such errors matter little when they involve low-stakes decisions like selecting a product or a service, but they matter greatly when they affect a business-critical decision that may open the organization to accusations of unethical behavior, including bias, or to regulatory compliance issues that can lead to audits or fines.

Litigation and regulation impacts use

Concern over litigation and regulations will initially limit how large organizations use generative AI. This is especially true in highly regulated industries such as financial services and healthcare, where tolerance is very low for unethical, biased decisions, and where incomplete or inaccurate data and models can have detrimental repercussions.

Eventually, the regulatory landscape for generative models will catch up, but companies will need to be proactive in adhering to emerging regulations to avoid compliance violations, harm to their reputation, audits and fines.

What can you do now to scale generative AI responsibly?

As the outcomes of AI insights become more business-critical and technology choices continue to grow, you need assurance that your models are operating responsibly, with transparent processes and explainable results. Organizations that proactively infuse governance into their AI initiatives can better detect and mitigate model risk while strengthening their ability to meet ethical principles and government regulations.

Of utmost importance is to align with trusted technologies and enterprise capabilities. You can start by learning more about the advances IBM is making in new generative AI models with watsonx.ai and proactively put watsonx.governance in place to drive responsible, transparent and explainable AI workflows, today and for the future.

What is watsonx.governance?   

watsonx.governance provides a powerful governance, risk and compliance (GRC) toolkit built to operationalize AI lifecycle workflows, proactively detect and mitigate risk, and improve compliance with growing and changing legal, ethical and regulatory requirements. Customizable reports, dashboards and collaborative tools connect distributed teams, improving stakeholder efficiency, productivity and accountability. Automatic capture of model metadata and facts provides audit support while driving transparent and explainable model outcomes.
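
To give a sense of the kind of model metadata and facts such a workflow might capture, here is a generic sketch. It is not the watsonx.governance API; every field name below is a hypothetical example of the details an organization could record for audit support.

```python
# Generic illustration of capturing model facts for audit support.
# This is NOT the watsonx.governance API; field names are hypothetical
# examples of the metadata a governance workflow might record.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelFacts:
    name: str
    version: str
    owner: str
    training_data_source: str
    intended_use: str
    evaluation_metric: str
    evaluation_score: float
    approved_on: str

facts = ModelFacts(
    name="customer-support-summarizer",
    version="1.2.0",
    owner="ai-governance-team@example.com",
    training_data_source="internal support tickets (anonymized)",
    intended_use="summarize support conversations for agents",
    evaluation_metric="ROUGE-L",
    evaluation_score=0.41,
    approved_on=str(date.today()),
)

# Persisting facts as JSON gives reviewers a transparent, auditable record.
print(json.dumps(asdict(facts), indent=2))
```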

Accelerate governance and simplify risk management across your entire organization with IBM OpenPages, a unified governance, risk and compliance (GRC) solution to help manage, monitor and report on risk and compliance. Learn more about how watsonx.governance is driving responsible, transparent and explainable AI workflows and the enhancements coming in the future.

Sign up for the watsonx.governance waitlist