
Risks and Limitations of Generative AI in Research and Creative Activities

General Risks and Limitations

Be aware that output generated by generative AI may contain misinformation, inaccuracies, bias, or inappropriate content, and may cause unintentional harm.

Recommended Practices for Using Third-Party Generative AI Systems:

  • Don't enter sensitive university or research data into a third-party generative AI system unless the system has been approved by MSU IT.
  • Carefully consider the impact of using generative AI before entering data.
  • Critically evaluate and corroborate information obtained from generative AI.
  • Understand the disclosure requirements for using generative AI in academic work.
  • Stay informed about conversations around AI technology and adhere to updated guidance from the university. 

Preparing Publications

When using generative AI to prepare publications, ensuring compliance with journal standards is crucial. Focus on these key aspects:

  • Journal Policies on AI Use: Confirm if the journal accepts AI-generated or AI-assisted content and understand their policy on disclosing AI's role.
  • Authorship and Originality: Adhere to the journal's guidelines on authorship and originality, especially regarding content created with AI assistance.
  • Data Integrity Verification: Rigorously verify the accuracy and integrity of data processed by AI to align with the journal's standards.
  • Disclosure of AI Involvement: Transparently disclose the extent of AI involvement in the research and manuscript preparation.
  • Review for Biases or Inaccuracies: Carefully review any AI-generated content for potential biases or inaccuracies before submission. 

Peer Review

Generative AI should be avoided in the peer review process to uphold confidentiality and integrity, particularly noting: 

  • Confidentiality and Integrity: Generative AI can compromise the secure and unbiased nature of the peer review process.
  • Intellectual Property Exposure: There is a potential for unintentional disclosure or misuse of authors' confidential or proprietary information.
  • Compliance with Regulations: Adherence to specific policies set by funding agencies and institutions is critical.
    • NIH Notice NOT-OD-23-149 strictly prohibits the use of generative AI in the peer review of grant applications and proposals due to confidentiality concerns​​.
    • NSF Notice to the Research Community prohibits uploading any content from proposals, review information, and related records to non-approved generative AI tools. Proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.

Grant Preparation

Grant preparation using generative AI should be approached with a clear understanding of the risks and responsibilities: 

  • Agency Policies: Stay informed about and adhere to the specific policies of funding agencies regarding AI use in grant writing.
  • Investigator Responsibility: The investigator must take full ownership of the entire proposal, even when assisted by generative AI.
  • Originality Assurance: Ensure the proposal genuinely represents the research team’s own ideas, maintaining the integrity of the content.
  • Non-Compliance: Be aware that issues such as plagiarism or falsified information in AI-assisted content can lead to serious enforcement actions by the funding agency.
  • Risk Acknowledgment: Use generative AI only with a full understanding of the potential risks involved.

Equity, Inclusivity, and Ethical Use

Generative AI can amplify bias, produce misleading or false results, and work against our values of inclusion and equity. This can lead to erosion of trust, damaging the pursuit of knowledge. Therefore, use of generative AI calls for special care to avoid some known problems: 

  • Lack of Transparency: Use caution when using generative AI services where there is a lack of transparency about the material used to train the model or where processes to protect against bias and hallucinations are unclear.
  • Bias: Be aware that training material may not be representative or inclusive of all points of view, which can skew results in ways that reinforce what is represented and omit what is not.
  • Ethical sourcing of training material: Models may rely on material that has been harvested without the knowledge or consent of creators or copyright holders.