Applications of artificial intelligence (AI) are advancing human understanding and having a tangible, positive impact on the health and well-being of communities. For example, AI-based tools have already strengthened diagnostic capabilities in medical fields and are supporting disaster prevention efforts. But the use of AI-based tools also raises ethical and social issues that instructors may want to consider and discuss with students.
Data and privacy
AI platforms are trained on massive datasets filled with personal information, from names and addresses to details about an individual’s behaviors and preferences. These datasets are used both to generate and to improve the performance of the algorithms that make AI platforms work. Outside of the companies that have developed AI platforms, few understand exactly what data was or is used, how it is processed, where it is stored, and who can access it. Beyond the fact that this information is used without user consent, there are risks that it could be used to manipulate, mislead, or surveil individuals.
Bias

Bias inheres in artificial intelligence platforms in at least two ways. First, the structure and function of AI platforms reflect the values, assumptions, and experiences of the decision-makers responsible for their development. Second, the datasets that train AI platforms and the algorithms that drive their functionality reflect and propagate the biases that pervade the societies and cultures that produced the data in those datasets. When prompted, generative AI platforms often reinforce cultural stereotypes that, among other things, privilege white males and those from North America and Europe, sexualize and objectify women and girls, and marginalize the contributions and lived experiences of people of color.
Accuracy

Even though AI output can seem remarkably authoritative, AI platforms aren’t repositories of facts. Instead they formulate output by identifying patterns among the trillions of data points in their datasets. Put very simplistically, AI platforms generate sentences (or images or programs) by choosing the words (or visual components or source code) most likely to follow the previous word. In addition, because they are trained on data produced by humans, AI platforms frequently repeat inaccuracies and misinformation found in their original datasets.
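The "most likely next word" idea can be made concrete with a toy sketch. The miniature word-pair model and corpus below are invented for illustration; real platforms use neural networks over billions of parameters, but the core move, namely sampling each next word in proportion to how often it followed the last one, is the same in spirit.

```python
import random

# Toy "language model": for each word, the words observed to follow it
# in a tiny made-up corpus, with counts standing in for probabilities.
# (Illustrative assumption only; no real model works from a table like this.)
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"on": 3},
    "on":  {"the": 2},
}

def generate(start, length, seed=0):
    """Sample each next word in proportion to how often it followed the last."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        followers = bigram_counts.get(words[-1])
        if not followers:          # no observed continuation: stop
            break
        choices, weights = zip(*followers.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 5))  # e.g. "the cat sat on the cat"
```

Note that nothing in this procedure checks whether the generated sentence is true; it only checks whether each word plausibly follows the previous one, which is why fluent output can still be wrong.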
Equity and access

In addition to the bias that pervades the datasets upon which AI platforms are built, AI tools raise significant equity issues. Although most AI platforms currently provide users with free access options, the output from free versions is often less robust than that generated by premium versions. Additionally, given the cost involved in maintaining free versions, companies may eventually move to an entirely fee-based model. This “digital divide” may also arise from students’ different levels of AI literacy. Students from areas with less reliable access to technology or from secondary schools that banned AI may be less prepared to use AI tools to their benefit.
Environmental impact

AI is energy intensive. A recent assessment of the energy demanded by AI systems predicts that in 2027 about 1.5 million servers will annually draw 85.4 terawatt-hours of electricity. While AI might boost efficiency and theoretically offset energy consumption, historically, new technology tends to boost demand in a way that exceeds the energy savings gained through efficiency.
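For a rough sense of scale, dividing the projected total by the projected server count (a back-of-the-envelope calculation using only the two figures cited above, not additional data) gives the average continuous draw per server:

```python
# Assumed inputs, taken from the projection cited in the text.
servers = 1.5e6          # projected AI servers in 2027
annual_twh = 85.4        # projected annual electricity use, terawatt-hours
hours_per_year = 365 * 24  # 8760

annual_kwh_per_server = annual_twh * 1e9 / servers  # 1 TWh = 1e9 kWh
avg_kw_per_server = annual_kwh_per_server / hours_per_year

print(f"{annual_kwh_per_server:,.0f} kWh per server per year")  # 56,933
print(f"{avg_kw_per_server:.1f} kW average draw per server")    # 6.5
```

Roughly 6.5 kilowatts of continuous draw per server is on the order of several household electric ovens running around the clock, which helps convey why the aggregate figure is so large.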
Intellectual property

The datasets upon which AI platforms were trained included the copyrighted creative and intellectual work of artists, scholars, and writers from around the world and across time. Additionally, many AI platforms acknowledge that they integrate user prompts and conversations into the datasets used to refine the platforms. Thus, when students feed draft essays into AI platforms, they are, in effect, consenting to having their intellectual and creative work integrated into the AI tool.
Citations and plagiarism

The output from generative AI platforms cannot accurately cite the materials used to generate that output. Responses to prompts may include “citations,” but these may or may not provide a traceable path to original sources. Moreover, AI platforms have been known to fabricate or “hallucinate” citations to texts and intellectual products that do not actually exist. Thus, conventional ways of thinking about and identifying instances of plagiarism will not be particularly useful. Further, because AI platforms are constantly advancing, it is unlikely that technological tools will be sufficient to identify if and when students use AI platforms to complete assignments.