This article is based on a panel discussion at the Institute for Business Analytics’ Spring 2023 Analytics Conference on NLP/NLG, featuring Scott Curtis, Vice President of Service Delivery at Deloitte Consulting; Greg Hayworth, Associate VP of Enterprise Data Science at Humana; Ergin Soysal, Senior NLP Research Scientist at Eli Lilly and Company; and Akanksha Mansharamani, M.S. in Information Systems student and Vice President of MSIS Women in Technology.
In five years, “ChatGPT” is going to be a $2,000 question on Jeopardy that everyone misses.
At least that’s what Scott Curtis, Vice President of Service Delivery at Deloitte Consulting, said. While he may have been joking, Curtis and two other panelists agreed that although Natural Language Processing (NLP) and Natural Language Generation (NLG) aren’t going away, individual generative language models will come and fade quickly over the next few years.
Curtis, Greg Hayworth, Associate VP of Enterprise Data Science at Humana, and Ergin Soysal, Senior NLP Research Scientist at Eli Lilly and Company, spoke in a panel during the Spring 2023 Analytics Conference on NLP/NLG, sponsored by the Institute for Business Analytics at the Kelley School of Business. The panel, moderated by Akanksha Mansharamani, an M.S. in Information Systems student and Vice President of MSIS Women in Technology, discussed current practices, challenges, and career opportunities in NLP and NLG.
Current NLP and NLG Model Practices and Challenges
All three panel members use some form of generative language models in their work, and Hayworth encouraged listeners to think beyond ChatGPT as the main generative language model.
“I want to stop using the word ChatGPT,” Hayworth said. “I want us to think about generative large language models in general, because ChatGPT is the one that’s the most famous today. Instead, think about how we are taking advantage of models that are able to take language in and generate new language.”
While Hayworth and his coworkers at Humana use NLP models, Hayworth warned against using them to find facts. Instead, he suggested using semantic search to find facts within your own knowledge bases.
“Take advantage of the large language model to create a fluent response from the results of your semantic search about your domain-specific information,” Hayworth said. “When you start doing those sorts of things, you become able to create a lot of value in a short period of time. But if you’re counting on one model to know everything about everything, it will fail you.”
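The pattern Hayworth describes — retrieve facts from your own knowledge base first, then let the language model phrase the answer — can be sketched in a few lines. The snippet below is a minimal, illustrative sketch, not production code: the knowledge-base sentences are invented placeholders, the "embedding" is a toy bag-of-words vector (a real system would use a sentence-embedding model), and `llm_generate` is a hypothetical stand-in for a call to a generative model.

```python
import math
from collections import Counter

# Toy knowledge base: placeholder sentences standing in for your
# domain-specific documents.
KNOWLEDGE_BASE = [
    "Members can renew prescriptions online through the member portal.",
    "Claims are typically processed within 30 days of submission.",
    "Preventive care visits are covered at no additional cost.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real system would use a sentence-embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Return the documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

def answer(query: str) -> str:
    """Ground the response in retrieved facts instead of relying on
    one model to 'know everything about everything'."""
    facts = semantic_search(query, KNOWLEDGE_BASE)
    prompt = f"Using only these facts: {facts}\nAnswer this question: {query}"
    # return llm_generate(prompt)  # hypothetical call to a generative model
    return prompt
```

The key design point is the division of labor: the search step supplies the facts, so the generative model is only asked to do what it is reliably good at — producing fluent language from the material it is given.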
The panel also discussed how to measure the effectiveness of generative language models. According to Soysal, it can be easy to measure the technical performance of a model, but difficult to validate the business benefits. Hayworth proposed three areas to measure in business contexts: revenue, costs, and speed.
“In a business context, you’re probably going to measure one of three things,” Hayworth said. “Either it’s increasing revenue, it’s decreasing costs, or it’s making you go faster. If you can’t put it in one of those three contexts, you’re going to have more difficulty getting your senior leaders to buy in.”
Curtis also recommended setting a baseline expectation for a model’s performance to determine if it’s performing better than the current practices or processes in place.
“Is the model better than what you have today?” Curtis said. “You have to make sure that you can measure your baseline today and the impact of the model or technique you’ve implemented.”
Ethical Considerations
As Hayworth mentioned, these models can fail or make mistakes. Curtis also pointed out two concerns that he considers in his role at Deloitte around using generative language models.
“Our clients are paying us to provide them certain services, certain strategic consulting based on our expertise,” Curtis said. “From a standpoint of putting things into generative AI to get a PowerPoint presentation or to publish a report, certainly we can consider some ethical challenges there. There are plenty of times ChatGPT gets the facts wrong.”
In addition to this ethical concern, Curtis said there’s also a security concern around using these models. Curtis’s clients, including pharmaceutical companies, healthcare providers and administrators, or financial services companies, may share a portion of their data with Deloitte.
“We can’t be cavalier about what we do with their data, saying ‘here’s this new tool, I only have some idea of who’s hosting it, where it’s hosted, or who has access to that infrastructure,’” Curtis said. “But who has access to the servers, who has access to the data? All of those are considerations that we have to make sure we’re paying attention to because again, we’ve already made assertions to those same clients that we’re only storing their data in certain parameters with certain encryption, or within our cloud administered accounts.”
Soysal pointed out that data access concerns are also important for companies relying on the knowledge economy.
“With language models, your data can become part of a model and can be used, even if only theoretically,” Soysal said. “Open chat answers actually come from a knowledge source, and if your product is that knowledge, then you’re essentially giving that knowledge away.”
Career Opportunities in NLP and NLG
Mansharamani’s final two questions for the panel focused on job opportunities, including whether the panelists worry that human jobs may be replaced by NLP technology. According to Hayworth, jobs may change, but that doesn’t mean there’s no work left for real people to do.
“These are tools that we can use in productive ways to do more interesting work, do it faster and do it cheaper, but it doesn’t make it so that we don’t still have to do work,” Hayworth said. “I think as tools become available that increase our personal productivity, the standard for what’s expected in that productivity will continue to grow. I’m not a philosopher, but I’m not concerned that AI is going to take over all of our jobs and take over all of the world.”
Soysal noted that some jobs that mainly do repetitive tasks may be eliminated, but new jobs that focus on generative language models may be created and job descriptions may be expanded.
“Most business students probably see job descriptions that include needing to be able to use Microsoft Excel or Word,” Soysal said. “Soon, we might start seeing those types of job requirements, plus certifications in ChatGPT.”
His main concern, however, is the long-term effects of these technologies on human brain development. For example, if a student is able to use a model to do their homework, it may limit their problem-solving abilities.
“I have concerns about developing our kids’ brains in a proper way,” Soysal said. “This is a big deal actually from that perspective, because if you don’t use your mental capacity properly, you will end up being less capable.”
In light of jobs shifting and mental capacity potentially changing, Curtis offered a final piece of advice for students: stay curious.
“Curiosity is a really key attribute in being successful in any kind of analytics,” Curtis said. “You have to understand that there are a lot of different ways to come at analytic solutions and analytic problems in deploying models and in finding the next way to improve something.”