This paper is co-authored by Chewei Liu, Hillol Bala, Arun Rai (Georgia State University), and Akshat Lakhiwal (University of Georgia).
In March 2020, educators and students in the U.S. witnessed an abrupt, disruptive transition: the move to online-only education due to the COVID-19 pandemic. Under stressors like this one, people may seek relief through questionable behaviors, which the authors of this study define as behaviors that, unlike outright delinquency or illegal actions, skirt the boundary between ethical and unethical behavior, violate normative structures, and threaten the overall well-being of society.
The authors focus on how novel technologies powered by artificial intelligence (AI) algorithms change questionable behaviors: these technologies can make such behaviors harder to detect and can empower individuals to engage in them during major disruptions. The authors explore the rise in questionable behaviors online during the COVID-19 pandemic and argue that this unintended influence of novel AI-enabled technologies on questionable behaviors demands immediate attention from researchers, educational institutions, platforms, and policy makers.
Statement of Problem
AI technologies, deployed at scale on digital platforms, have pervaded mainstream activities across many life and work contexts. These technologies give individuals resources to cope with unanticipated, severe disruptions, like the switch to online-only education in March 2020. Under extreme stress, individuals may attempt to gain relief by engaging in questionable behaviors, which include questionable research practices such as altering research data, academic plagiarism, browsing prohibited websites at the workplace, and misusing online identities.
The authors’ main research objective is to determine whether, and to what extent, AI technologies in digital platforms play a role in individuals’ engagement in questionable behaviors during periods of severe disruption.
Data Sources
The authors drew data for the study from SimilarWeb.com, a web-based analytics engine that collects webometrics for more than 80 million websites. They used SimilarWeb’s audience interest tool to create a list of tools and platforms that attract similar audience types and have overlapping online search interest within five categories: paraphrasing, essay mill, notes sharing, grammar or text editing, and wiki-style platforms.
Paraphrasing, essay mill, and notes sharing websites were categorized as “questionable tool platforms,” while grammar or text editing and wiki-style platforms were categorized as “acceptable tool platforms.” Daily web-traffic data were procured for 78 websites, all of which had more than 5,000 unique monthly visitors, with particular interest in AI-enabled questionable tool platforms that can aid and conceal plagiarism and undermine academic integrity.
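To make the platform grouping and sampling concrete, here is a minimal data-preparation sketch in Python. The file name, column names, and category labels are illustrative assumptions, not SimilarWeb’s actual export schema; only the grouping logic and the 5,000-visitor threshold come from the study’s description.

```python
import pandas as pd

# Hypothetical daily web-traffic extract; the file and column names are
# illustrative, not SimilarWeb's actual schema.
traffic = pd.read_csv("similarweb_daily_traffic.csv")
# columns: site, date, daily_unique_visitors, monthly_unique_visitors, category

QUESTIONABLE = {"paraphrasing", "essay_mill", "notes_sharing"}

# Group platforms as the study describes: paraphrasing, essay mill, and
# notes sharing sites are "questionable"; grammar/text editing and
# wiki-style sites are "acceptable."
traffic["platform_type"] = traffic["category"].apply(
    lambda c: "questionable" if c in QUESTIONABLE else "acceptable"
)

# Keep sites clearing the study's threshold of more than 5,000 unique
# monthly visitors.
sample = traffic[traffic["monthly_unique_visitors"] > 5000].copy()

# Flag observations after the March 2020 move to online-only education
# (the exact cutoff date is assumed from the text).
sample["post_transition"] = pd.to_datetime(sample["date"]) >= "2020-03-01"
```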
The authors also used data from Google Trends to examine weekly search interest in keywords related to questionable tool platforms between January 2019 and May 2020.
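A query of this kind can be reproduced with pytrends, an unofficial Python client for Google Trends; the keywords below are placeholders, since the study’s exact keyword set is not listed here.

```python
from pytrends.request import TrendReq

# Placeholder keywords; the study's actual keyword set is not given here.
keywords = ["paraphrasing tool", "essay rewriter"]

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=keywords,
    timeframe="2019-01-01 2020-05-31",  # January 2019 through May 2020
    geo="US",
)

# Ranges of this length return a weekly index of relative interest (0-100).
weekly_interest = pytrends.interest_over_time()

# State-level contrasts like those in the Results section could draw on
# per-region interest scores.
by_state = pytrends.interest_by_region(resolution="REGION")
```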
Analytic Techniques
The authors focus on the educational context during the transition to online-only education, which they call the “COVID Transition.” The research design measures the change in questionable behaviors due to the COVID Transition by contrasting web metrics, including daily visitors, across digital platforms that may or may not enable questionable behaviors and that render their services through AI-enabled or non-AI means.
The authors first examine the relative effect of the COVID Transition on the change in daily unique visitors to questionable versus acceptable tool platforms. They then examine whether information disclosures that demonstrate structural assurances to a platform’s users amplify this effect, suggesting that features that lower users’ fear of detection strengthen the pull of these platforms. The authors also collect and analyze data on search engine interest in paraphrasing questionable tool platforms, COVID-19 cases at universities across U.S. states, and state-level differences in individuals’ access to socio-economic resources that can help them endure such disruptive periods.
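In spirit, this contrast is a difference-in-differences comparison. The sketch below, continuing from the data-preparation sketch above, shows one way to estimate it with statsmodels; the log-visitors specification and variable names are assumptions, not the paper’s exact model.

```python
import numpy as np
import statsmodels.formula.api as smf

# Continues from the `sample` panel sketched earlier: one row per site
# per day, with platform_type and post_transition already attached.
sample["questionable"] = (sample["platform_type"] == "questionable").astype(int)
sample["post"] = sample["post_transition"].astype(int)
sample["log_visitors"] = np.log1p(sample["daily_unique_visitors"])

# Difference-in-differences: the questionable:post interaction captures the
# disproportionate post-transition change for questionable tool platforms.
did = smf.ols("log_visitors ~ questionable * post", data=sample).fit(
    cov_type="cluster", cov_kwds={"groups": sample["site"]}
)
print(did.summary().tables[1])
```

Adding site fixed effects (e.g., `C(site)` in the formula) would absorb time-invariant differences across platforms; in a log specification, a small interaction coefficient reads approximately as a percentage gap.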
Results
The authors find that the COVID Transition resulted in an increase in daily visitors to both questionable and acceptable tool platforms, but questionable tool platforms saw a disproportionately higher increase in visitors (around 5%) than acceptable tool platforms. The disproportionate increase in daily visitors was highest for questionable tool platforms with paraphrasing tools, web-based text transformation tools that use AI-based algorithms to create plagiarism-free, undetectable copies of text. There was also an increase in visitors to platforms that demonstrate structural assurances to visitors, such as signaling website security by displaying secure sockets layer (SSL) certification and disclosing the use of algorithms to render the service.
The Google Trends data likewise showed an increase in online search interest for keywords related to questionable tool platforms from 2019 to the period following the COVID Transition. The increase was higher in states that reported higher rates of COVID-19 cases in universities in 2020, and higher still in states where individuals and households show less socio-economic resilience to disruptive periods such as natural disasters or pandemics.
Business Implications
Beyond documenting the rise in hard-to-detect, AI-enabled questionable behaviors following the COVID Transition, the results show that AI can lower the detectability of questionable behaviors and encourage deviant coping behaviors during severe disruptions. According to the authors, these dynamics could have far-reaching social implications and could affect the integrity of creative, research, and educational fields.
The authors suggest a holistic response to combat AI-enabled questionable behaviors. They recommend strategies that include building awareness of the potential harm of these technologies through campaigns at educational institutions; requiring platforms to obtain users’ consent not to use their AI-enabled tools for plagiarism; and partnering with parents, communities, and faculty to provide distressed students with support and alternative coping mechanisms.