Generative AI in Research
Generative AI is a transformative force in research, using advanced algorithms to produce new insights and data interpretations from large-scale datasets. This rapidly evolving field is reshaping research methodologies, offering remarkable opportunities while also posing unique challenges. Integrating it responsibly requires a clear understanding of the ethical and legal considerations involved. The interim guidance document provided here offers researchers and scholars essential guidelines for ensuring that their use of generative AI in research adheres to ethical standards and complies with legal regulations. Researchers should also fully recognize and navigate the potential risks associated with deploying generative AI.
Additional Resources
National Institutes of Health (NIH)
- The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process (NOT-OD-23-149) - 6/23/2023
National Science Foundation (NSF)
Journal Resources
Major scientific journals have established guidelines for using AI and large language models in research; some prohibit their use, while others permit it with an emphasis on risk and responsibility. Authors must transparently disclose AI involvement, specifying the models and tools used, and ensure the integrity of their content. For detailed policies on AI and LLM use, consult each journal's specific guidance. For examples, you can visit the websites of the following journals:
National Institute of Standards and Technology (NIST)
Other MSU Guidance
General Guidance
Students
Educators
- Memo from the Provost - August 1, 2023
- Interim Guidance on Generative AI in Instructional Settings
- ChatGPT FAQ for MSU Educators
- Academic Integrity and Generative AI Resources for Educators
- Generative AI Syllabus Guide
- Community Guidance on Generative AI