
AI push or pause: CIOs speak out on the best path forward

Feature
May 10, 2023 | 11 mins
Artificial Intelligence | Generative AI | IT Leadership

Recent advances have highlighted AI’s incomparable potential and its not yet fully understood risks, placing CIOs in the hot seat to figure out how best to leverage this increasingly controversial technology in business. Here’s what they have to say about it.


With the AI hype cycle and the subsequent backlash both in full swing, IT leaders find themselves at a precarious inflection point regarding the use of artificial intelligence in the enterprise.

Following stern warnings from Elon Musk and revered AI pioneer Geoffrey Hinton, who recently left Google to speak openly about AI’s risks and add his voice to calls for a pause, IT leaders are reaching out to institutions, consulting firms, and attorneys across the globe for advice on the path forward.

“The recent cautionary remarks of tech CEOs such as Elon Musk about the potential dangers of artificial intelligence demonstrate that we are not doing enough to mitigate the consequences of our innovation,” says Atti Riazi, SVP and CIO of Hearst. “It is our duty as innovators to innovate responsibly and to understand the implications of technology on human life, society, and culture.”

That sentiment is echoed by many IT leaders, who believe innovation in a free market society is inevitable and should be encouraged, especially in this era of digital transformation — but only with the right rules and regulations in place to prevent corporate catastrophe or worse.

“I agree a pause may be appropriate for some industries or certain high-stakes use cases, but in many other situations we should be pushing ahead and exploring at speed what opportunities these tools provide,” says Bob McCowan, CIO at Regeneron Pharmaceuticals.

“Many board members are questioning if these technologies should be adopted or are they going to create too many risks?” McCowan adds. “I see it as both. Ignore it or shut it down and you will be missing out on significant opportunity, but giving unfettered access [to employees] without controls in place could also put your organization at risk.”

While AI tools have been in use for years, the recent release of ChatGPT to the masses has stirred up considerably more controversy, giving many CIOs — and their boards — pause on how to proceed. Some CIOs take the risks to industry — and humanity — very seriously.

“Every day, I worry about this more,” says Steve Randich, CIO of the Financial Industry Regulatory Authority (FINRA), a self-regulatory organization overseen by the SEC.

Randich points to a chart he saw recently suggesting that the ‘mental’ capacity of an AI program has just exceeded that of a mouse and will exceed that of all humankind within 10 years. “Consider me concerned, especially if the AI programs can be influenced by bad actors and are able to hack, such as at nuclear codes,” he says.

George Westerman, a senior lecturer at MIT Sloan School of Management, says executives at enterprises across the globe are reaching out for advice from MIT Sloan and other institutions about the ethics, risks, and potential liabilities of using generative AI. Still, Westerman believes most CIOs have already engaged with their top executives and boards of directors, and that generative AI itself imposes no new legal liabilities beyond those corporations and their executives already face today.

“I would expect that just like all other officers of companies that there’s [legal] coverage there for your official duties,” Westerman says of CIOs’ personal legal exposure to AI fallout, noting the exception of using the technology inappropriately for personal gain.

Playing catchup on generative AI

Meanwhile, the release of ChatGPT has rattled regulatory oversight efforts. The EU had planned to enact its AI Act last month but opted to hold off after ChatGPT’s release, amid concerns that the policies would be outdated before they took effect. And as the European Commission and its related governing bodies work to sort out the implications of generative AI, company executives in Europe and the US are taking the warning bells seriously.

“As AI becomes a key part of our landscape and narrow AI turns into general AI — who becomes liable? The heads of technology, the inanimate machine models? The human interveners ratifying/changing training models? The technology is moving fast, but the controls and ethics around it are not,” says Adriana Karaboutis, group chief information and digital officer at National Grid, which is based in the UK but operates in the northeast US as well.

“There is a catchup game here. To this end, and in the meantime, managing AI in the enterprise lies with the CxOs who oversee corporate and organizational risk. CIO/CTO/CDO/CISOs are no longer the owners of information risk” given the rise of AI, the CIDO maintains. “IT relies on the CEO and all CxOs, which means corporate culture and awareness of the huge benefits of AI, as well as its risks, must be owned.”

Stockholm-based telecom Ericsson sees huge upside in generative AI and is investing in creating multiple generative AI models, including large language models, says Rickard Wieselfors, vice president and head of enterprise automation and AI at Ericsson.

“There is a sound self-criticism within the AI industry, and we are taking responsible AI very seriously,” he says. “There are multiple questions without answers in terms of intellectual property rights to text or source code used in the training. Furthermore, data leakage in querying the models, bias, factual mistakes, lack of completeness or granularity, and lack of model accuracy certainly limit what you can use the models for.

“With great capability comes great responsibility and we support and participate in the current spirit of self-criticism and philosophical reflections on what AI could bring to the world,” Wieselfors says.

Some CIOs, such as Choice Hotels’ Brian Kirkland, are monitoring the technology but do not think generative AI is fully ready for commercial use.

“I do believe it is important for industry to make sure that they are aware of the risk, reward, and impact of using generative AI technologies, like ChatGPT. There are risks to data ownership and generated content that must be understood and managed to avoid negative impacts to the company,” Kirkland says. “At the same time, there is a lot of upside and opportunity to consider. The upside will be significant when there is an ability to safely and securely merge a private data set with the public data in those systems.

“There is going to be a dramatic change in how AI and machine learning enable business value through everything from AI-generated content to complex and meaningful business analytics and decision making,” the Choice Hotels CIO says.

No one is suggesting a total hold on such a powerful and life-changing technology.

In a recent Gartner poll of more than 2,500 executives, 45% said the attention around ChatGPT has caused them to increase their AI investments. More than 70% say their enterprise is currently exploring generative AI, and 19% have pilots or production use underway, with projects from companies such as Unilever and CarMax already showing promise.

At the MIT Sloan CIO conference starting May 15, Irving Wladawsky-Berger will host a panel on the potential risks and rewards of entering generative AI waters. Recently, he hosted a pre-conference discussion on the technology.

“We’re all excited about generative AI today,” said the former longtime IBM researcher and current affiliate researcher at MIT Sloan, citing major advances in genomics expected due to AI.

But Wladawsky-Berger noted that the due diligence required of those who adopt the technology will not be a simple task. “It just takes so much work,” he said. “[We must] figure out what works, what is safe, and what trials to do. That’s the part that takes time.”

Another CIO on the panel, Wafaa Mamilli, chief digital and technology officer at Zoetis, said generative AI is giving pharmaceutical companies increased confidence that chronic human illnesses can be cured.

“Because of the advances of generative AI technologies and computing power on genetic research, there are now trials in the US and outside of the US, Japan, and Europe that are targeting to cure diabetes,” she said.

Guardrails and guidelines: Generative AI essentials

Wall Street has more than taken notice of the industry’s swift embrace of generative AI. According to IDC, 2022 was a record-breaking year for investments in generative AI startups, with equity funding exceeding $2.6 billion.

“Whether it is content creation with Jasper.ai, image creation with Midjourney, or text processing with Azure OpenAI services, there is a generative AI foundation model to boost various aspects of your business,” according to one of several recent IDC reports on generative AI.

And CIOs already have the means of putting guardrails in place to securely move forward with generative AI pilots, Regeneron’s McCowan notes.

“It’s of critical importance that you have policy and guidelines to manage access and behaviors of those that plan to use the technologies and to remind your staff to protect intellectual property, PII [personally identifiable information], as well as reiterating that what gets shared may become public,” McCowan says.

“Get your innovators and your lawyers together to find a risk-based model of using these tools and be clear what data you may expose, and what rights you have to the output from these solutions,” he says. “Start using the technologies with less risky use cases and learn from each iteration. Get started or you will lose out.”

Forrester Research analyst David Truog notes that AI leaders are right to put a warning label on generative AI before enterprises begin pilots and production use. But he too is confident it can be done.

“I don’t think stopping or pausing AI is the right path,” Truog says. “The more pragmatic and constructive path is to be judicious in selecting use cases where specialized AIs can help, embed thoughtful guardrails, and have an intentional air-gapping strategy. That would be a starting point.”

One DevOps software company’s technology chief points to several ways CIOs can mitigate risk when using generative AI, including thinking like a venture capitalist; clearly understanding the technology’s value; determining ethical and legal considerations in advance of testing; experimenting, but not rushing into investments; and considering the implications from the customer’s point of view.

“Smart CIOs will form oversight committees or partner with outside consultants who can guide the organization through the implementation and help set up guidelines to promote responsible use,” says Rod Cope, CTO at Minneapolis-based Perforce.  “While investing in AI provides tremendous value for the enterprise, implementing it into your tech stack requires thoughtful consideration to protect you, your organization, and your customers.”

While the rise of generative AI will certainly impact human jobs, some IT leaders, such as Ed Fox, CTO at managed services provider MetTel, believe the fallout may be exaggerated, although everyone will likely have to adapt or fall behind.

“Some people will lose jobs during this awakening of generative AI but not to the extent some are forecasting,” Fox says. “Those of us that don’t embrace the real-time encyclopedia will be passed by.”

Still, if one theme is certain, it’s that for most CIOs, proceeding with caution is the best path forward. So too is getting involved.

CIOs must strike a balance between “strict regulations that stifle innovation and guidelines to ensure that AI is developed and used responsibly,” says Tom Richer, general manager of Wipro’s Google Business Group, noting he is collaborating with his alma mater, Cornell, and its AI Initiative, to proceed prudently.

“It’s vital for CIOs and IT executives to be aware of the potential risks and benefits of generative AI and to work with experts in the field to develop responsible strategies for its use,” Richer says. “This collaboration needs to involve universities, big tech, think tanks, and government research centers to develop best practices and guidelines for the development and deployment of AI technologies.”
