WPC COVER STORY

The AI Revolution


Beyond the Industrial Age into uncharted territories

By Betsy Loeff
As artificial intelligence reshapes everything from productivity to medicine, experts highlight its transformative potential and the urgent need to address emerging ethical and societal challenges.

“The societal implications of AI are potentially more profound than those of the Industrial Revolution,” says Pei-yu “Sharon” Chen, Red Avenue Foundation Professor and chair of the Department of Information Systems.

Many share Chen’s perspective. A 2023 report published by consulting giant McKinsey & Company predicts that “Generative AI is poised to unleash the next wave of productivity” (McKinsey & Company, 2023, The Economic Potential of Generative AI).

What makes AI such a game-changer? Along with its capabilities, “AI is easily accessible as a general technology to everyone without any deep technical background or a lot of capital,” Chen says. AI is also applicable to any industry or sector. For instance, Chen sees it used in medicine for pharmacology research, patient care, diagnostics, and more.


Pei-yu “Sharon” Chen
Red Avenue Foundation Professor and Chair of the Department of Information Systems

On the diagnostics side, she recalls a news article about a mother’s search for relief for her child who was suffering chronic pain that had stymied 17 doctors. “Then the mom decided to give ChatGPT a try. She typed in her son’s symptoms, and ChatGPT successfully made the diagnosis of her son’s condition.”

In drug development, Chen continues, “You cannot test everything. AI can work through huge combinations of drugs and ingredients and filter down the combination that should go on to clinical trial.”

Despite its potential, AI poses risks. One is the bias that may be embedded in historical data. Chen points to the case of Amazon’s AI-based recruiting tool, which screened job applicants: Like many tech companies where women have historically been underrepresented, Amazon’s data contained fewer résumés from women, causing the algorithm to learn past hiring patterns as a preference.

Another risk is disinformation. “Propaganda and misinformation are not new,” says Geoff Pofahl, clinical assistant professor of information systems. “They’ve been around for a long time, but now people can produce misinformation at a mass scale and make it visible to millions of people.”

Geoff Pofahl
Clinical Assistant Professor of Information Systems

In one instance, a software developer and an idea person teamed up to create a website that would promote one ideology and pan another. “The site was designed to crawl the internet and scrape news headlines from legitimate websites. Once it had those headlines, it had generative AI write counterstories,” Pofahl explains. The site had fake writers, articles, and readers populating the comments section. It produced 20 articles daily for a month. The cost? $400.

“Platforms like X and Facebook are already set up to feed people exactly what they want to see,” says Pofahl, who adds that easily created fake news sites can “exacerbate the information echo chambers where people see only what they want to see, true or not.”

This polarizing example is one reason why using AI mindfully is among the issues schools like ASU will address. Of course, leveraging AI is also part of the curriculum.

Pofahl teaches two of the three classes in an AI certificate program, and he tells his students, “If you can describe it, you can do it with AI.” After all, AI knows languages, including English, French, Mandarin, Java, HTML, Python, and more. “The most powerful thing about generative AI is its ability to translate problems and tasks that we can describe in human language to computer code,” Pofahl says.

Getting to know AI better

AI is so critical to business success these days that the W. P. Carey School has several initiatives and programs aimed at helping students, faculty, staff, and business leaders better understand it.

One is the Center for AI and Data Analytics (AIDA) for Business and Society (aka, the Mindful AI center). Within AIDA are three research programs:

  • The Actionable Analytics Lab is an applied research lab that provides rigorous solutions to real-world problems.
  • The Digital Society Initiative fosters collaboration within the IT sector to answer questions of strategic importance.
  • The ASU Blockchain Research Lab focuses on research to advance blockchain-based technologies.

For those looking at AI as a career opportunity or workplace advantage, the W. P. Carey Certificate in Artificial Intelligence in Business consists of three five-week classes:

  • Artificial Intelligence in Business introduces students to the fundamentals of AI and machine learning.
  • Python for Data Analysis is a hands-on course that teaches high-level programming basics and concepts.
  • Machine Learning in Business teaches students to select appropriate machine learning algorithms and interpret their outcomes to help guide organizational decision-making.
The school also offers a Bachelor of Science in Artificial Intelligence in Business and a Master of Science in Artificial Intelligence in Business (MS-AIB)—the first of its kind at a graduate business school—at the Tempe campus.

According to Dan Mazzola, faculty director for the MS-ISM (Master of Science in Information Systems Management) program and assistant chair and clinical professor of information systems, the new AIB master’s degree rounds out a three-legged stool of IS education the W. P. Carey School offers.


Dan Mazzola
Faculty Director for the Master of Science in Information Systems Management program and Assistant Chair and Clinical Professor of Information Systems


“If you are a quant-heavy person and love data, the Master of Science in Business Analytics might be for you,” Mazzola says. “If you’re looking for IT leadership, the Master of Science in Information Systems Management might be a good fit. If you want to focus on innovative and mindful applications at the intersection of AI and business, the Master of Science in Artificial Intelligence in Business could be right for you.” The bachelor’s degree has a similar focus as the master’s degree.

Mazzola says the programs are open to diverse backgrounds and don’t require an undergraduate degree in technology. “That said, we will have business classes and courses where students will be hands-on in writing code. You’ve got to have an aptitude for this,” he adds.

W. P. Carey integrates mindful AI into all the courses students take, from cybersecurity to business strategy.

“Why do we have education?” Chen asks. “It’s to shape values and ethics, so we know good and bad.” She points to the idea of recycling, something most of us now feel good about when we do it and guilty when we don’t. “We wouldn’t have felt that way without education.”

Think of the site as a giant Rubik’s Cube. There are all these cells, and inside the cells are all your products for sale. The system decides how to get products out of the big cube most efficiently.
—Elliot Rabinovich
AVNET Professor in Supply Chain Management

In store for e-commerce

It’s been more than two decades since Amazon began encouraging site visitors to click the “buy” button with an AI-driven recommendation engine. Now, recommenders are commonplace, and online retailers use artificial intelligence to tackle other issues, says Elliot Rabinovich, AVNET Professor in Supply Chain Management.

One issue is those pesky returns. Shoes, for instance, offer endless options when you consider brands, styles, colors, sizes, and widths. Worse, brands don’t standardize sizes, so some online retailers now use AI to estimate and recommend the best size for a customer based on how different brands fit.
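For readers curious how a cross-brand size recommender might work, here is a minimal sketch. The brand names and fit offsets are invented for illustration; a real retailer would learn these adjustments from purchase and return data across many customers.

```python
# Toy cross-brand size estimate: adjust a customer's known size by a
# brand's average fit offset. Offsets here are invented for illustration;
# in practice they would be learned from purchase and return histories.
brand_fit_offset = {
    "BrandA": 0.0,    # true to size
    "BrandB": 0.5,    # runs half a size small, so recommend larger
    "BrandC": -0.5,   # runs half a size large, so recommend smaller
}

def recommend_size(known_size, brand):
    """Recommend a size for this brand given a size the customer knows fits."""
    return known_size + brand_fit_offset[brand]

print(recommend_size(9.0, "BrandB"))  # 9.5
print(recommend_size(9.0, "BrandC"))  # 8.5
```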

Fulfilling orders is another area where AI works. “Organizations are using robots to load and unload trailers at distribution centers,” Rabinovich says. “They also use image recognition algorithms to have robots handle order picking. The robots go into the warehouse and get items for shipping.”

Kroger went big into robotic fulfillment, opening “dark warehouses” to support delivery orders. “They call them ‘dark warehouses’ because there’s no human labor involved,” Rabinovich explains. “Think of the site as a giant Rubik’s Cube. There are all these cells, and inside the cells are all your products for sale.” More than 1,000 cell-sized robots whiz around each warehouse, stopping at cells as needed to deposit goods that have just arrived at the site or to pick up goods to add to someone’s grocery order.

AI coordinates the movements of those robots as they zoom from cell to cell. “The system decides how to get products out of the big cube most efficiently,” Rabinovich explains. “It prioritizes orders that need faster delivery than others, doesn’t need a lot of energy, and takes less time. The whole building is AI-driven.”
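The trade-off Rabinovich describes—delivery urgency, energy use, and picking time—can be illustrated with a toy priority score. The weights and field names below are invented for illustration; a real warehouse system would tune or learn them from operational data.

```python
import heapq

# Toy priority score: lower scores are picked first. The weights are
# assumptions for illustration, not a real fulfillment system's values.
def priority(order):
    urgency_w, energy_w, time_w = 0.6, 0.2, 0.2
    return (urgency_w * order["hours_until_due"]
            + energy_w * order["energy_cost"]
            + time_w * order["pick_minutes"])

orders = [
    {"id": "A", "hours_until_due": 2, "energy_cost": 5, "pick_minutes": 4},
    {"id": "B", "hours_until_due": 24, "energy_cost": 1, "pick_minutes": 2},
    {"id": "C", "hours_until_due": 1, "energy_cost": 8, "pick_minutes": 6},
]

# Min-heap: the order with the smallest combined score is fetched first.
heap = [(priority(o), o["id"]) for o in orders]
heapq.heapify(heap)
sequence = [heapq.heappop(heap)[1] for _ in range(len(heap))]
print(sequence)  # ['A', 'C', 'B'] — the most urgent orders come out first
```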

Want fries with that?

If you search the McDonald’s website for a crew member job in your area, virtual recruiting assistant Olivia will greet you. She’ll guide you through the process, screen you, schedule an interview with a hiring manager if there’s a good fit, answer your questions, and even handle the onboarding communications.

Olivia is the product of Paradox, a firm founded and now chaired by Aaron Matos (BS Management ’95), who also started Jobing.com. “We like to think of it as putting a recruiter in every store,” Matos says. “We’ve reduced McDonald’s time to fill a position from 17 days to under three.”

The conversational, AI-driven virtual assistants Paradox develops do what Matos calls “administrative stuff … all the friction in the process.” He believes people should handle “the art of recruiting, figuring out if you are a good fit for this team, this project, or this company.” However, Olivia and assistants like her can do the rest, including sending out background checks before hiring managers make an offer to the job hunter.


Aaron Matos
(BS Management ’95)

Judging by Paradox’s client list, other businesses agree with this viewpoint. Unilever, Lowe’s, GM, Disney, and nearly a thousand other firms worldwide use Paradox technology.

“We believe technology will continue to chip away at tasks people don’t want to do,” Matos says. “The reason we call the company Paradox is because we think that AI, by eliminating the administrative work, allows people to do higher-value work, more strategy, more interesting creative work.”

He adds that in the case of McDonald’s, which doesn’t have recruiters for stores, the virtual recruiting assistant saves managers three to four hours each week they would have spent reading through résumés. Some customers report enhanced revenues because they’ve been able to staff their businesses fully.

Matos doesn’t believe we should use AI to make employment decisions, but neither should we avoid the technology. “People are worried that AI will replace jobs, and it will replace tasks,” he explains. “The reality is that all technology—from the wheel to the printing press to Microsoft Word—has always replaced work we used to do more laboriously. That will continue, so companies must invest in and train their employees to help them change their skills.”

The most powerful thing about generative AI is its ability to translate problems and tasks that we can describe in human language to computer code.
—Geoff Pofahl
Clinical Assistant Professor of Information Systems

Social media menace

An estimated 5% of Facebook accounts are counterfeit, and Meta continually deletes these AI-generated accounts. In November 2023, Cybernews calculated that Facebook had “deleted 27.67 billion fake accounts since October 2017, which is 3.5 times more than the total population of planet Earth.”

“Nearly half of internet traffic comes from bots,” says Victor Benjamin, an assistant professor in the information systems department. Bots are software-based robots designed to perform tasks, and much of Benjamin’s research has focused on developing artificial intelligence to identify bots online. He calls this effort a “computational arms race” because as soon as we design a detection tool, bots find a way around it.


Victor Benjamin
Assistant Professor of Information Systems

Cybersecurity company Imperva has been tracking bot traffic online since 2013, and its 2024 report maintains that 49.6% of internet traffic in 2023 was attributable to bots. Worse, the accessibility of AI is increasing the number of bots with malicious intent. Simple bad bot traffic—traffic from bots lacking the latest evasion techniques used by advanced bots—rose from 33.4% in 2022 to 39.6% last year. The latest report also states that the U.S. is the most targeted country, receiving 47% of last year’s bot attacks, followed by the Netherlands with 9%.

Spot the bot

A study by George Washington University, published in January 2024, predicted “an escalation of daily, bad-actor activity driven by AI by mid-2024,” just ahead of the U.S. and other global elections (George Washington University, 2024, Predicting and Controlling Bad Actor AI Activity in a Year of Global Elections). “Tools to automate bot attacks have become much more prevalent and accessible to nation states, foreign adversaries, and even criminal groups, and they use this technology for different purposes, such as pushing propaganda and trying to scam people,” Benjamin says.

“Bots try to present a no-win scenario for the average American,” he continues. “When it comes to the election, the bot creator’s purpose is to make Americans feel despair and angry at other Americans for holding different political beliefs. Bots work at making Americans hate each other.” They do this by posting extreme views on social media and via “rage baiting,” creating or passing along a fake story to enrage some platform users.

Benjamin says distinguishing bots from real folks is challenging and getting harder each day. In 2023, researchers at Copenhagen Business School tested the bot-detection skills of 375 people evaluating real and fake Twitter accounts. Nearly 42% of study participants flagged one real account as bogus, and 90% believed one of the better-performing fakes was a person.

So, what can we do? Benjamin says simple CAPTCHA-style tests will no longer be effective at differentiating humans from bots. “Research shows that AI can defeat CAPTCHA tests. We should assume we’re progressing toward a world where software can mimic humans,” he says.

Benjamin also says that social media platforms are part of the solution. “The platforms try to pursue bot activity, but it’s still problematic. So far, actions taken have been very ad hoc. There needs to be a systemic policy in place,” he explains.

In addition, Benjamin thinks platforms should release more metadata. “If a hashtag becomes popular overnight, metadata could show users how it grew. Was it organic growth, or did thousands of accounts suddenly activate and use that hashtag simultaneously?” he asks.
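Benjamin's organic-versus-coordinated distinction can be sketched with a simple burst check on hourly usage counts. The spike threshold here is an assumption for illustration, not any platform's actual rule.

```python
# Toy burst check on hourly counts of a hashtag's use. A sudden jump far
# above the prior average suggests coordinated activation rather than
# organic growth. The spike_ratio threshold is invented for illustration.
def looks_coordinated(hourly_counts, spike_ratio=10):
    """Flag a spike if any hour exceeds spike_ratio times the prior average."""
    for i in range(1, len(hourly_counts)):
        prior = hourly_counts[:i]
        avg = sum(prior) / len(prior)
        if avg > 0 and hourly_counts[i] > spike_ratio * avg:
            return True
    return False

organic = [5, 8, 12, 20, 31, 45]        # steady hour-over-hour growth
coordinated = [2, 3, 2, 3, 400, 380]    # thousands of accounts activate at once

print(looks_coordinated(organic))      # False
print(looks_coordinated(coordinated))  # True
```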

Finally, Benjamin thinks regulatory moves may be necessary, and user education is crucial. “It’s not just text that large language models can craft; they can make fake videos that are pretty good,” he says. “The technological capacity to generate content will lead to increased manipulation on social media.”

Benjamin concludes: “I don’t see this issue getting better. I think it’s going to get worse. We need all hands on deck to solve this. It’s a societal problem that will increase in intensity until we address it holistically.”

Cleaner living with AI

Professor of Supply Chain Management Kevin Dooley leads a global research team on sustainability with more than 100 of the world’s largest manufacturers and retailers. He is chief research scientist at The Sustainability Consortium within ASU’s Global Institute of Sustainability and Innovation. Here are a few things he thinks AI will do in the future.

Grow up. “Because of climate change, we’re losing agricultural land,” Dooley says. “We’ve already used most available land with good soil for agriculture, but there’s increasing demand for bio-based resources for things other than food, like bioplastics, and we’ve pushed fertilizers as far as we can to increase crop yields.”

What’s next? “It’s probably going to come down to genetic engineering and precision agriculture to make that next leap,” Dooley explains. “For example, there are bee-sized drones that go out into a field and make highly local, precise measurements of humidity, temperature, acidic level, all that type of stuff.” Already, some successful farmers treat each square yard of a field as a separate farm, and that will become more common. AI’s job will be to interpret data and map out farming strategies.

Skip a week. In the future, we may all have smart garbage bins that monitor what we toss into them and learn the pace of our trash and recycling habits. “Instead of trash collection crews coming around once a week, there’s a smart system that learns and then customizes the route for the drivers so we handle household waste more efficiently,” Dooley says.

Write on. Sustainability reporting can be tedious and complex. “A lot of companies are required to disclose their actions to investors, government entities, or customers, and each of those requests for information is different,” Dooley says. “Fairly soon, AI will be able to read the requirements of what the investor or regulator wants to know, go into the databases in your company, extract the information, and automatically write it up in a customized way for that particular information request.”

Dooley also sees a day when neural networks make predictions that don’t make sense to humans. “I think that’s both a risk and the point at which we get to the true power of AI technology,” he says. “Trust in the decision-making capability of these systems has to be based on having systems designed to explain their logic. We need to translate machine intelligence back into human intelligence.”

The reality is that all technology—from the wheel to the printing press to Microsoft Word—has always replaced work we used to do more laboriously.
—Aaron Matos
(BS Management ’95)

Bots at the doctor’s office

Rebecca Clyde (MBA ’02) remembers that awful time her 4-year-old daughter’s asthma medicine ran out. “To renew the prescription required another appointment with the physician, and every time I could call to schedule an appointment, the front desk would put me on hold for 45 minutes,” she recalls. “The switchboard would time out and hang up on me. We didn’t get the appointment, and my daughter ended up hospitalized and developed chronic health problems from that incident.”

Rebecca Clyde
(MBA ’02)

Clyde says the mismatch between the number of callers to medical offices and the number of people available to answer calls is common. “Everyone has a wait time,” she says—sometimes up to an hour.

Such wait times no longer impede callers to health care offices that use virtual assistants from Botco.ai, a company Clyde founded and now leads as CEO.

“Say someone is searching for a behavioral health provider,” Clyde says. “They get to the website, see our chatbot, and ask questions. Do you take my insurance? Do you have therapists who support this type of condition? Can you see me on Thursday afternoon?”

The chatbots answer questions like these, schedule appointments, and guide people through intake information gathering. They can also do it in multiple languages. “This enables providers to be available 24/7 instead of only during the hours when their switchboard is operational,” Clyde says.

Along with serving in medical offices and senior living facilities, the chatbots also work in government service agencies and nonprofits that help people connect with social services. “If somebody needs food or diapers in Arizona, we’ve partnered with Valley Leadership to create a chatbot that anyone in Arizona can access for free and request those kinds of social assistance services,” Clyde says. “Botco’s chatbots are integrated with 211 services, the call-in assistance that helps people fulfill urgent needs like shelter or rental support.”

Chatbots help deliver assistance to those in need. For health care providers and senior centers, it’s a way to provide better service and improve the conversion of callers and site visitors into clients, patients, or residents.

“Our customers track those numbers, and they’re seeing 60% to 100% increases in their conversion rates,” Clyde says.

The truth is out there

There are around 200 million peer-reviewed articles online that you can find and learn about through Consensus, an AI-driven search engine that locates and summarizes findings from research papers. If you’re searching for validated information, this tool can help you find it.

Along with serving up and summarizing vetted research papers, the Consensus search engine gives information about the studies’ quality. If you use the app, you’ll see that it tells you if one of the papers that surfaced is “highly cited” or published in a “rigorous journal.” The application also tells users if the study was a randomized controlled trial, or RCT. Plus, the most recent and promising research is at the top.

“Our goal is to surface the most relevant research to a user question,” says Brett Nebeker (BS Accountancy ’13, MS-BA ’14). Nebeker, who heads up the AI team and infrastructure for the company, says there are many ways the application lets researchers know if the information is valuable. “RCTs have a little green box near the citation, indicating this is a rigorous study. Some papers get a yellow mark, meaning they’re not as good, but they’re still OK. Consensus marks the least rigorous ones in red,” he notes.


Brett Nebeker
(BS Accountancy ’13, MS-BA ’14)

Nebeker explains that animal trials for human medicines would get a lower rating than human trials, and studies with only a few participants would rank lower than those with many. Users can also specify filters for the search—for instance, they can let the software know they want to see only RCTs.

OpenAI’s ChatGPT integrates Consensus for paid subscribers to that service. However, the “Consensus app is more powerful on its own because it has additional synthesis features not available through ChatGPT,” Nebeker says. “At the top of any query, we summarize the top paper. For a ‘yes or no’ question, there’s the Consensus Meter, which tells you the percentage of papers that would answer your question with a ‘yes,’ a ‘no,’ or a ‘maybe.’ ”
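The Consensus Meter idea—tallying how papers answer a yes/no question—can be sketched in a few lines. The labels and percentages below are invented for illustration; in Consensus itself, the per-paper answers would come from an AI classifier reading each paper.

```python
from collections import Counter

# Toy version of a yes/no/maybe meter. Each element of paper_answers is
# a label for one paper's answer to the user's question; in practice
# these labels would be produced by a model, not hand-entered.
def consensus_meter(paper_answers):
    counts = Counter(paper_answers)
    total = len(paper_answers)
    return {label: round(100 * counts[label] / total)
            for label in ("yes", "no", "maybe")}

answers = ["yes", "yes", "no", "maybe", "yes",
           "yes", "no", "yes", "maybe", "yes"]
print(consensus_meter(answers))  # {'yes': 60, 'no': 20, 'maybe': 20}
```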

Most of the papers surface as abstracts. When papers are available in their entirety without a paywall, Consensus provides access to them.

Students and academic researchers are the most significant subset of users, but the app also sees traffic from medical professionals, business professionals, consultants, and curious individuals.

“Scientific research is some of the most valuable information ever created, but it’s also notoriously hard to consume,” Nebeker says. “The two founders of Consensus come from academic families, so they’ve always had exposure to this space. The advancements in AI and the spread of misinformation made Consensus seem like a great play.”

Right on the money

Anyone who has ever bought a car or house knows credit scoring can help or hurt you. Lenders consider credit scores one way to limit risk—a crucial issue for the financial sector, says Tian Lu, assistant professor of information systems.

Risk management means prescreening, forecasting, and prediction before making a loan. After a loan is on the books, risk management might entail delinquency prevention and bad debt recovery. “The financial sector has widely applied machine and deep learning for fraud detection, credit scoring, and risk assessment,” Lu says. AI allows lenders to deepen their analyses for more precision in risk assessment.

It’s not just text that large language models can craft; they can make fake videos that are pretty good.
—Victor Benjamin
Assistant Professor of Information Systems
Lu defines AI as data plus algorithms; now, granular behavioral data is thrown into the mix. “Many traditional financial institutions, microlending companies, and mobile payment service providers are applying various machine learning and deep learning approaches to detect users’ behavioral abnormalities,” he says. “This includes shopping at irregular times, purchasing special products, abnormal sentiment on social media posts, and prosocial intentions.”

For instance, Lu’s research found that people who spend substantially more on online game cards purchased at irregular times—such as early morning or late evening—are more likely to default on loans. Conversely, borrowers who choose more sustainable and higher-quality goods exhibit prosocial behavior. “These people will be more likely to meet financial obligations and pay their debt,” Lu notes.
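A logistic-style score can illustrate how behavioral signals like the ones in Lu's research might feed a default-risk model. The feature names and weights below are invented for illustration and are not from Lu's actual models.

```python
import math

# Illustrative default-risk score from two behavioral signals echoing
# Lu's findings: irregular-hour game-card spending raises risk, while a
# prosocial preference for sustainable goods lowers it. The weights and
# bias are assumptions for illustration, not fitted parameters.
def default_risk(borrower):
    weights = {
        "irregular_hour_game_spend": 1.5,   # raises risk
        "sustainable_goods_share": -2.0,    # prosocial signal lowers risk
    }
    bias = -1.0
    z = bias + sum(weights[k] * borrower[k] for k in weights)
    return 1 / (1 + math.exp(-z))  # sigmoid: probability-like score in (0, 1)

night_gamer = {"irregular_hour_game_spend": 1.0, "sustainable_goods_share": 0.1}
prosocial = {"irregular_hour_game_spend": 0.0, "sustainable_goods_share": 0.8}

print(default_risk(night_gamer) > default_risk(prosocial))  # True
```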

He also has evaluated how cell phone data impacts credit scoring. Surprisingly, he says that cell phone data can “increase both the credit scoring evaluation accuracy and financial equality.”

Lu’s experiments with cellphone data and credit scoring were inspired by his interest in financial market equity. “Thin file” borrowers have little or no credit history and often lack behavioral data, too, so it’s harder for them to access credit. “They don’t use social media or purchase many products. But everyone today is using cellphones,” Lu says. This data gives lenders more insight into the potential borrower’s personality.

In one study, Lu found that profiling borrowers using smartphone activities was 1.3 times more effective in improving financial inclusion for these underserved, thin-file borrowers than using online social media information. It also was 1.3 times more effective in enhancing the lender’s profitability.

Lu calls this approach a successful and “mindful AI” application. “What is the success?” he asks. “It’s not just the accuracy but also the fairness and social welfare this provides. Ethical AI is also responsible AI.”

AI: Act ethically, innovate wisely

AI is revolutionizing everything from social interactions to retail. Its immense potential arrives alongside ethical and societal challenges, underscoring the need for mindful AI education and applications to navigate those challenges effectively.

As we stand on the brink of this AI-driven future, the question is how we will harness this powerful tool and whether we can rise to the ethical challenge it presents, ensuring our technological advancements uplift society rather than divide it.

Cover spotlight:

The students featured on the cover are part of a related story on page 5, where they discuss their experiences with the AI in Business Club.