Because of potential data security and privacy issues that exist with generative AI tools, EMU has developed policies about using these tools as part of the data analysis process during research. The Division of Information Technology's Sensitive Data Guide has information on whether specific types of data can be used in EMU's subscriptions to generative AI tools.
In accordance with this guidance, consult with the EMU Office of Research Compliance before entering any research data into these tools. They can be reached by email at research_compliance@emich.edu or by phone at 734-487-3090.
If you are planning a future research project that requires an IRB submission, the submission should detail how any generative AI tools will interact with your research data.
Because of generative AI, some funding entities have begun receiving dramatically more grant proposals than they used to. The National Institutes of Health (NIH) even announced a policy change in 2025 that limits each Principal Investigator to six applications per year, noting that generative AI tools had allowed some PIs to submit up to 40 applications for a single funding opportunity. The NIH will also reject applications that it believes contain sections that are substantially AI-generated.
When it comes to using generative AI in grant applications or for grant-funded research, the exact rules you must follow will differ from one funder to another. However, here are some general principles to keep in mind:
Just as with publications, the author(s) of a research proposal bear final responsibility for all of its contents. If you use generative AI for your proposal and it introduces a fake citation or a sentence that plagiarizes another source, do not assume the error will be judged differently just because an AI tool produced it.
For example, the NIH said in their 2025 notice "Supporting Fairness and Originality in NIH Research Applications" that they always consider plagiarism and fabricated citations to be research misconduct, even if they are produced by an AI tool instead of a person. Assume that any issues with your proposal that were caused by an AI tool will be treated as an error that you, personally, made.
Not all funders directly address whether and how they allow you to use generative AI in writing research proposals. Even if they do address it, the language might be vague or not describe the specific way you want to use it.
As an example: researchers sometimes prompt generative AI to rewrite their proposal so it is understandable to the people who will review it, who might not understand their research area or work in their discipline at all. However, revising a proposal this way involves substantial enough changes by the AI tool that some funders would not allow it.
Unfortunately, few funders specifically name which uses for AI are allowed and which ones aren't. Just like with any other grant-related issue, it is your responsibility to ask the funder about any points of confusion and ensure that you understand their requirements.
Keeping a record of the tools you used, the prompts you entered, and the results you received can help demonstrate that you followed the funder's rules. When it comes time to share your findings, publications and conferences may also require details about how you used generative AI, so it is best to document this as you go.