Before using a gen AI tool for research or schoolwork, check your course syllabus or ask your professor to determine whether your class allows the use of gen AI. Different professors at EMU have different policies about AI use.
AI-generated content is challenging traditional definitions of intellectual property. Historically, intellectual property law has assumed that human creativity and ingenuity are the basis for ownership. However, when AI systems produce content—often trained on existing works without attribution—the question of who owns the output becomes less clear, especially when the content lacks a direct human author.
Generative AI systems are trained on vast amounts of human-generated content, which means they inherit—and often amplify—the biases present in that data. In fields like health research, longstanding disparities have resulted in certain populations, especially minority groups, being understudied. As a result, generative AI tools may reproduce these gaps in knowledge and reflect biased perspectives without acknowledging their limitations. Worse, they often present information with a tone of confidence that can mislead users into accepting biased or incomplete outputs as objective truth.
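How this plays out can be sketched in a few lines of code. The example below is a toy illustration, not a real model: the group sizes, label patterns, and the "always predict the majority label" stand-in are all invented to show how an underrepresented group ends up poorly served by a system that looks accurate overall.

```python
import random

random.seed(0)

# Hypothetical training data: group A is heavily overrepresented,
# and the two groups follow different label patterns.
data = [("A", 1 if random.random() < 0.9 else 0) for _ in range(900)]
data += [("B", 0 if random.random() < 0.9 else 1) for _ in range(100)]

# A crude stand-in for a model: always predict the most common label.
labels = [y for _, y in data]
majority = max(set(labels), key=labels.count)

def accuracy(group):
    rows = [y for g, y in data if g == group]
    return sum(y == majority for y in rows) / len(rows)

print(f"overall accuracy:    {sum(y == majority for y in labels) / len(labels):.2f}")
print(f"accuracy on group A: {accuracy('A'):.2f}")  # high
print(f"accuracy on group B: {accuracy('B'):.2f}")  # low
```

The headline accuracy looks respectable, but nearly all of the errors fall on the underrepresented group, and nothing in the output signals that.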
"Artificial Intelligence makes manifest of bias that has always been there." -Jutta Treviranus, Director and Founder of the Inclusive Design Research Centre in the "We Count!" lecture: https://disabilityethicalai.org/2023/05/we-count/
Generative AI requires significant computational power, which comes with a substantial environmental cost. The data centers that run large language models consume large amounts of electricity and often rely on fresh water for cooling. Studies have estimated that a single AI-assisted search can produce over five times the carbon emissions of a standard web search. While generative AI has the potential to support environmental research and optimize sustainability efforts, the long-term ecological impact of these systems remains uncertain. Visit the How Hungry is AI dashboard for one research group's analysis of the environmental costs of AI.
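For a rough sense of scale, the comparison below multiplies out the "five times" estimate. Both the baseline emission figure and the usage volume are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope comparison. The baseline and volume figures
# are illustrative assumptions, not measured values.
STANDARD_SEARCH_G_CO2 = 0.2   # assumed grams of CO2 per ordinary web search
AI_MULTIPLIER = 5             # "over five times," per the estimate above
SEARCHES_PER_DAY = 1_000_000  # hypothetical usage volume

ai_search_g_co2 = STANDARD_SEARCH_G_CO2 * AI_MULTIPLIER
extra_kg_per_day = (ai_search_g_co2 - STANDARD_SEARCH_G_CO2) * SEARCHES_PER_DAY / 1000

print(f"one AI-assisted search: ~{ai_search_g_co2:.1f} g CO2")
print(f"extra emissions at {SEARCHES_PER_DAY:,} searches/day: ~{extra_kg_per_day:,.0f} kg CO2")
```

Even a small per-query difference compounds quickly at web scale.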
Using AI for evidence synthesis presents challenges due to the “black box” nature of many AI systems. This term refers to the lack of transparency in how AI models generate results—often without publicly available or understandable documentation of their algorithms and decision-making processes. As a result, it becomes difficult to reanalyze, validate, or reproduce findings, which undermines the rigor and reliability expected in evidence-based research.
The Ouroboros Effect takes its name from the ancient symbol of a snake eating its own tail—a metaphor for self-reinforcing cycles. In the context of AI, it describes a risk that researchers also call "model collapse": generative AI models begin to train on content that was itself generated by other AI systems. As AI-generated content becomes more widespread, future models may increasingly rely on this synthetic data. Over time, this feedback loop could degrade the quality and originality of AI outputs, potentially leading to a collapse in the reliability and usefulness of these tools.
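The dynamic is easy to demonstrate with a toy simulation. Below, each "generation" fits a simple statistical model (a normal distribution) to its training data, then produces the next generation's training data by sampling from itself. To mimic the tendency of generative models to underrepresent rare, tail examples, the sketch discards samples far from the mean; the distribution and cutoff are invented for illustration, not taken from any real training pipeline.

```python
import random
import statistics

random.seed(42)

# Generation 0: diverse "human-written" data.
data = [random.gauss(0, 10) for _ in range(1000)]

for generation in range(6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: spread (stdev) = {sigma:.2f}")
    # The next model trains only on synthetic output from the previous
    # one. Dropping samples more than one standard deviation from the
    # mean mimics how rare, unusual content gets underrepresented.
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    data = [x for x in samples if abs(x - mu) <= sigma]
```

The spread shrinks every generation: the synthetic data steadily loses the variety that the original human-generated data contained.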
“Garbage in, garbage out” is a foundational principle in computing and data science. It means that the quality of an AI system’s output depends directly on the quality of its input data. If flawed, biased, or inaccurate information is used to train an AI model, the results will reflect those same issues. When poor-quality outputs are then used as new training data, the cycle compounds—leading to increasingly unreliable and misleading results.
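The compounding can be reduced to a one-line recurrence: if some fraction of the training data is already flawed, and each generation's outputs (which are recycled as the next generation's training data) add a little new corruption of their own, the flawed fraction only ever grows. The rates below are invented for illustration.

```python
# Toy model of compounding data corruption; both rates are
# illustrative assumptions, not measurements of any real system.
corrupted = 0.05        # fraction of the initial training data that is flawed
added_per_round = 0.02  # new corruption each generation introduces into clean outputs

for generation in range(8):
    print(f"gen {generation}: {corrupted:.1%} of training data corrupted")
    # Outputs inherit the existing corruption, add a bit more, and then
    # become the next generation's training data.
    corrupted = corrupted + (1 - corrupted) * added_per_round
```

Nothing inside the loop ever removes corruption, so the flawed fraction can only grow; keeping poor-quality data out in the first place is the only real remedy.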
