In April of 2023, Jeff Morgan, a mathematics professor and associate provost at the University of Houston, decided to give ChatGPT a math test. Specifically, he wanted to determine if the massively popular artificial intelligence platform—still less than six months old at the time—could solve the type of question he would pose to the sophomores in his linear algebra class.
He asked the machine, “Can you determine the number of positive definite 2 by 2 real symmetric matrices whose entries are integers from -10 to 10?” When he later recapped this quiz for a post on the UH AI/ChatGPT blog, he wrote that the problem was one he could solve by writing a few lines of code. In other words, it was the sort of quiz the world’s most powerful, publicly available AI chatbot should be able to parse with ease.
Instead, the machine made a series of simple mistakes, wrote some gibberish that Morgan called “pseudo code,” and came up with an answer of 310, well off from the true answer of 986. The professor concluded that ChatGPT seemed to understand the question, but badly fumbled the logic needed to answer it. When he shares results like this with his class, Morgan’s message to the students of 2023 is “buyer beware.”
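Morgan says the problem takes only a few lines of code to solve. A minimal brute-force sketch in Python, using Sylvester's criterion (a 2×2 symmetric matrix [[a, b], [b, c]] is positive definite exactly when a > 0 and its determinant ac − b² is positive), reproduces the answer of 986:

```python
# Count positive definite 2x2 real symmetric matrices [[a, b], [b, c]]
# with integer entries from -10 to 10.
# Sylvester's criterion for a 2x2 symmetric matrix: positive definite
# iff a > 0 and the determinant a*c - b**2 > 0.
count = sum(
    1
    for a in range(-10, 11)
    for b in range(-10, 11)
    for c in range(-10, 11)
    if a > 0 and a * c - b * b > 0
)
print(count)  # 986
```

The exhaustive loop checks all 21³ entry combinations, which is exactly the kind of mechanical enumeration the chatbot fumbled.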
“I was very open and honest with them,” he says. “The underlying statement to them was, ‘Hey, be careful with this stuff because it can give as many wrong answers as it gives right answers.’”
The past year has felt, by any reckoning, like an inflection point in the progression of artificial intelligence, the branch of computing that broadly gives machines the ability (or, at least, the perceived ability) to learn, to reason and to converse at or beyond the abilities of human beings. Even more exciting: The machines seem to be able to analyze new data without being told what kind of data it is, or that it’s even data. Their capacity to comprehend data and communicate results seems potentially limitless.
And yet, ChatGPT would still flunk Professor Morgan’s linear algebra class. We asked Morgan and two other UH faculty members who study AI in different capacities to assess the technology and to guess where it’s going. As undergrads, executives, creatives, researchers, policymakers and tinkerers all seek to sort out the future of AI, the view from UH’s cognoscenti is that the machines have promise—if we can get to know their strengths and weaknesses.
AI DEMANDS A NEW KIND OF COMPUTER LITERACY
Professor Meng Li counts himself firmly in the camp of AI believers. One day, he expects most of us will be using some sort of AI tool to help us with our jobs, with relatively little downside. Until then, his research shows, people and AI are in an uneasy period of getting to know one another, and the degree of trust that we put into AI must be situational. Not all AI applications are created equal, and as we encounter these new tools, we need to be aware of limitations—theirs and ours.
In the rush to deploy AI, researchers haven’t had enough time to put guardrails on the technology. Platforms such as ChatGPT operate on enormous amounts of data—inputs that the machines examine and synthesize into conclusions. In Li’s view, AI product developers run a risk of gathering so much data and processing it so quickly that they don’t leave enough time to “clean” the data (that is, to examine the data for underlying flaws or biases) or to properly safeguard people’s privacy.
“Most people didn’t expect ChatGPT to be so influential,” Li says. When it debuted in 2022, ChatGPT became the fastest commercial app in history to reach 100 million users. The speed at which the world has adopted some form of AI has far outpaced the speed of research on its strengths and drawbacks. “We are slower as researchers, but I think we are catching up now. We are trying to understand whether AI can cause particular problems and how we address bias, collusion, all of these things.”
Responsible AI usage also means relying on it to execute certain tasks on which it outperforms humans. Yet many users—physicians, for instance—don’t believe the machines can outdo them ... yet.
“We find there is resistance in adopting AI from very smart people,” Li says. Doctors may not want to cooperate with machines that may one day take their jobs, or they simply may not agree with the conclusions these guidance systems offer. In either event, that mistrust could impede patients from getting the best treatment.
In his research, Li notices similar results in other settings as well—indications that people regard AI as an interloper of sorts. When buyers and suppliers haggle, for instance, he sees suppliers quote higher prices to the buyers when the buyers reveal they’re using an AI chatbot to lead the negotiation. People making buying decisions for large retailers resist AI’s suggestions on what orders to place.
However, users do trust AI implicitly for routine tasks such as making schedules. On tasks that incorporate what one might call judgment, people are reluctant to accept an AI’s suggestion, even when data show the AI system excels. Li’s research also finds that people tend to forgive mistakes that other people make. But they have slim patience for a mistake by an AI tool.
“Probably that’s human nature,” Li says. It may take years for managers to fully trust that an AI knows as much as it does. Until then, he says, it’s important to realize that one of our limitations as people is our caution, our mistrust. While AI has its flaws, it can almost certainly help us make stronger decisions at work.
Professor in the Conrad N. Hilton College of Global Hospitality Leadership
AI WILL MAKE US SMARTER TRAVELERS, SO LONG AS IT HAS A HUMAN TOUCH
Traveling is fraught with contradictions. Travelers want novelty, but they also want some familiar comforts. Travelers want the thrill of taking a chance, but they also prefer to keep risks to a minimum. Travelers want to feel like someone new—but of course, they are still their same old selves.
Since Professor Cristian Morosan studies how AI intersects with travel and hospitality, it should come as little surprise that he continues finding oxymorons. It turns out AI is great at synthesizing oodles of data about individual travelers and about restaurants, attractions, lodging and activities. AI can recommend how to build a schedule and can even go so far as to book dinner reservations or event tickets.
But for whatever reason, Morosan’s research shows, travelers simply don’t want to take recommendations from a machine, no matter how spot-on it might be. (Perhaps the feeling of the advice being too spot-on can unsettle a person.) Instead, it turns out, people prefer that hoteliers offer personal recommendations (even if those recommendations are AI-generated).
“Hoteliers are trying to get a set of systematic data that is clean and actionable in one single place. Right now, they don’t have that,” Morosan says. “So, they’re trying to piece that together from multiple systems: from the property management system, from the reservation systems, from their interactions with the consumers, from what consumers disclose about themselves in the loyalty program.”
Data this fractured needs a machine to sort it. But a traveler wants a person to deliver the results. The hotels offering the best customer experience, then, will be those giving a personal touch to a busy traveler. When a person arrives at a hotel jet-lagged, hungry and disoriented, how can their host offer the smartest, most-informed suggestions with a personal touch?
“Back in the day, I remember we did research on how people search for information on booking rooms and stuff,” Morosan says. “And what we found is that consumers find the right product immediately, but they’re going to keep searching anyway.”
The key to quieting the traveler’s anxiety about making suboptimal choices will probably be a blend of AI-based data-crunching and a host who looks them in the eye and gives what feels like an off-the-cuff (but brilliant) suggestion for a great sushi place, within a short walk of the hotel, which just happens to be near a great jazz club that serves fantastic bourbon cocktails. If, in fact, that’s your thing.
AI will eventually permeate the hospitality industry in ways the end user won’t detect, Morosan says. It will inform beverage management programs, small-scale events and personalized experiences. It could underpin dynamic room pricing that makes booking hotel stays feel like monitoring flight prices, giving the hotel industry tools to maximize occupancy or to raise prices automatically when a huge event is announced. (Taylor Swift just posted at midnight on Instagram that she’s coming to town in six months? The pricing system will bump up room rates in a jiffy.)
If Morosan’s predictions prove correct, AI will make more and more of the suggestions that take the homework out of travel. That is, if people wish to listen to its answers.
“Hotels shouldn’t necessarily have the goal of adopting AI as part of the services,” he says. “The broader goal should be to reimagine how they fit in today’s world and what is their service. By addressing that, they will automatically figure out the role of AI in this equation.”
AI WILL MOVE EVERYONE TO THE FAR RIGHT OF THE BELL CURVE. THEN WHAT?
Slide into a discussion about ChatGPT with Professor Jeff Morgan and you’re likely first to be transported back to 1950. That’s when Alan Turing, the grandfather of modern computing, published a paper called “Computing Machinery and Intelligence,” wherein he introduced the so-called Turing Test—a measure of whether a machine could convince a human that it is, in fact, another person.
In the 1970s and 1980s, researchers pioneered reasoning systems that amounted to forerunners of today’s machine learning. In the 1990s and early 2000s, the industry introduced neural networks, which gave the appearance of creativity and original “thoughts” that couldn’t be reconstructed simply by examining the inputs.
AI’s leap into the future was a long time in the making. These advances have relied on numerous technologies that have been maturing for years. And, as it happens, ChatGPT is still prone to many errors. (The gaffes AI made in Morgan’s linear algebra question demonstrate this point.)
Still, the recent explosion in AI tools represents important progress, in Morgan’s view. Just as graphing calculators and spreadsheets have given amateur statisticians and scientists computational powers once reserved for brilliant mathematicians, AI will raise the roof of what’s possible for almost everyone who uses it.
“In a lot of areas, what AI is going to do is, it’s going to take everybody from the least capable up to maybe the 95th percentile,” Morgan says, “and it’s going to throw ’em all up to the 95th percentile. It is going to change this whole notion of a bell curve, I think, in a lot of areas.”
At UH, a group of several dozen faculty and staff discuss AI on a dedicated listserv, of which Morgan is a member. During the life of that discussion, Morgan says, the prevailing attitude has moved from one of awe and concern to one of patient curiosity. What will AI mean for current students? For future students? For the university itself?
Over time, Morgan has come around to take the long view. Some things about learning, and about life, endure for good reason: Books, classrooms and human teachers are here for the long haul. Generally speaking, people gravitate towards other people. As for discernment? Morgan says, “Machines in general just have a hard time determining whether some things make sense.”