
Generative AI

This guide walks you through what to consider when using generative artificial intelligence (gen AI): selecting gen AI tools, best practices for using them, ethical considerations, and more.

Does your class allow the use of gen AI?

Before using a gen AI tool for research or schoolwork, check your course syllabus or ask your professor to determine whether your class allows gen AI. Different professors at EMU have different policies about AI use.

Intellectual Ownership

AI-generated content is challenging traditional definitions of intellectual property. Historically, intellectual property law has assumed that human creativity and ingenuity are the basis for ownership. However, when AI systems produce content, often after training on existing works without attribution, the question of who owns the output becomes less clear, especially when the content lacks a direct human author.

Bias

Generative AI systems are trained on vast amounts of human-generated content, which means they inherit—and often amplify—the biases present in that data. In fields like health research, longstanding disparities have resulted in certain populations, especially minority groups, being understudied. As a result, generative AI tools may reproduce these gaps in knowledge and reflect biased perspectives without acknowledging their limitations. Worse, they often present information with a tone of confidence that can mislead users into accepting biased or incomplete outputs as objective truth.

"Artificial Intelligence makes manifest of bias that has always been there." -Jutta Treviranus, Director and Founder of the Inclusive Design Research Centre in the "We Count!" lecture: https://disabilityethicalai.org/2023/05/we-count/ 

Environmental Cost

Generative AI requires significant computational power, which comes with a substantial environmental cost. The data centers that run large language models consume large amounts of electricity and often rely on fresh water for cooling. Studies have estimated that a single AI-assisted search can produce over five times the carbon emissions of a standard web search. While generative AI has the potential to support environmental research and optimize sustainability efforts, the long-term ecological impact of these systems remains uncertain. Visit the How Hungry is AI dashboard for one research group's analysis of the environmental costs of AI; a rough sketch of the arithmetic behind such estimates follows the sources below.

  • Luccioni, Sasha, Yacine Jernite, and Emma Strubell. "Power Hungry Processing: Watts Driving the Cost of AI Deployment?" Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2024. https://dl.acm.org/doi/pdf/10.1145/3630106.3658542
  • Jegham, Nidhal, et al. "How Hungry Is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference." arXiv preprint arXiv:2505.09598, 2025. https://arxiv.org/pdf/2505.09598
  • Kneese, Tamara, and Meg Young. "Carbon Emissions in the Tailpipe of Generative AI." Harvard Data Science Review, Special Issue 5, 2024.
  • Crawford, Kate. "Generative AI's Environmental Costs Are Soaring — and Mostly Secret." Nature, vol. 626, no. 8000, 2024, p. 693. https://doi.org/10.1038/d41586-024-00478-x
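
For a sense of how per-query costs scale, the back-of-the-envelope sketch below multiplies an assumed energy cost per query by an assumed daily query volume and grid emissions factor. All three figures are placeholders chosen only to show the arithmetic, not measurements; see the sources above for actual estimates.

    # Back-of-the-envelope emissions estimate for AI queries.
    # ALL figures below are placeholder assumptions chosen for
    # illustration; see the cited studies for measured values.

    ENERGY_PER_QUERY_WH = 3.0       # assumed energy per AI query (watt-hours)
    QUERIES_PER_DAY = 100_000_000   # assumed daily query volume
    GRID_CO2_G_PER_KWH = 400.0      # assumed grid emissions (g CO2 per kWh)

    daily_energy_kwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY / 1000
    daily_co2_tonnes = daily_energy_kwh * GRID_CO2_G_PER_KWH / 1_000_000

    print(f"Assumed daily energy use: {daily_energy_kwh:,.0f} kWh")
    print(f"Assumed daily emissions:  {daily_co2_tonnes:,.0f} tonnes CO2")

Under these placeholder numbers, the arithmetic works out to 300,000 kWh and roughly 120 tonnes of CO2 per day; the point is not the specific total but how quickly small per-query costs compound at scale.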

Reproducibility and the "Black Box"

Using AI for evidence synthesis presents challenges due to the “black box” nature of many AI systems. This term refers to the lack of transparency in how AI models generate results—often without publicly available or understandable documentation of their algorithms and decision-making processes. As a result, it becomes difficult to reanalyze, validate, or reproduce findings, which undermines the rigor and reliability expected in evidence-based research.
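
One small, testable piece of this problem is sampling randomness: generative models typically draw each output from a probability distribution, so the same prompt can yield different responses on different runs. The sketch below uses a hypothetical three-word "model" (not any real API) to show how unseeded sampling varies while a fixed seed restores repeatability for that one source of variation; the opacity of the training data and model weights remains either way.

    import random

    # Stand-in for a model's output distribution over three candidate
    # words; the words and weights are hypothetical.
    candidates = ["reliable", "uncertain", "novel"]
    weights = [0.5, 0.3, 0.2]

    def generate(seed=None):
        """Sample one 'output' from the distribution."""
        rng = random.Random(seed)
        return rng.choices(candidates, weights=weights, k=1)[0]

    # Unseeded runs can vary from call to call ...
    print([generate() for _ in range(5)])

    # ... while a fixed seed makes this one source of randomness
    # repeatable. The data and weights stay opaque either way.
    print([generate(seed=42) for _ in range(5)])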

The Ouroboros Effect and Garbage-in, Garbage-out

The Ouroboros Effect takes its name from the ancient symbol of a snake eating its own tail, a metaphor for self-reinforcing cycles. In the context of AI, it describes a potential risk where generative AI models begin to train on content that was itself generated by other AI systems. As AI-generated content becomes more widespread, future models may increasingly rely on this synthetic data. Over time, this feedback loop could degrade the quality and originality of AI outputs, potentially leading to a collapse in the reliability and usefulness of these tools.

“Garbage in, garbage out” is a foundational principle in computing and data science. It means that the quality of an AI system’s output depends directly on the quality of its input data. If flawed, biased, or inaccurate information is used to train an AI model, the results will reflect those same issues. When poor-quality outputs are then used as new training data, the cycle compounds—leading to increasingly unreliable and misleading results.

[Image: black-and-white clipart of a serpent eating its own tail, an ouroboros.]