Last updated January 26, 2024
Michigan State University's interim guide to using generative AI in research and creative activities establishes a framework for responsibly managing research data in alignment with state and federal laws, institutional policies, and intellectual property rights. It provides directives on the ethical use of generative AI tools and cautions against risks such as misinformation and bias. Furthermore, this document outlines best practices for employing generative AI across research processes, ensuring its application supports the university's mission while adhering to legal and ethical standards.
Data Compliance
Ensure all data use complies with state and federal laws and institutional regulations, including MSU's acceptable use and institutional data policies. This section pertains specifically to third-party commercial generative AI and is not necessarily applicable to local or enterprise generative AI options; for those, review and consideration by MSU IT's Governance, Risk, and Compliance team is required to ensure compliance. Ethical considerations aligned with the university's mission, vision, and values are also crucial.
Data Classification:
- Public Data: Generative AI can process publicly available information, general academic concepts, and non-sensitive data. However, its use must comply with MSU’s policies and consider ethical and reputational implications.
- Confidential/Private Data: Do not enter confidential data including, but not limited to, social security numbers, contact details, name/image/likeness, and any information covered by FERPA, HIPAA, or other regulations into any generative AI product without documented approval from MSU IT Governance, Risk, and Compliance (GRC), as well as express written consent from any other necessary parties.
- Research Data: Researchers must consider the nature and sensitivity of scholarly data before using generative AI tools to support research. Do not put data that are confidential, contain sensitive information, or are subject to specific legal or ethical requirements (e.g., human subjects’ data) into any generative AI tools without documented approval from MSU IT GRC, as well as express written consent from any other necessary parties.
- Intellectual Property: Avoid inputting proprietary or confidential information into generative AI, especially unpublished research findings, internal university data, or information protected by intellectual property rights without written consent from all stakeholders. Entering novel research findings into any generative AI product could constitute public disclosure and invalidate or alter MSU’s future intellectual property rights.
- ITAR, EAR, and CUI Compliance: Strictly avoid using generative AI for data that falls under International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), and Controlled Unclassified Information (CUI). Researchers must ensure they do not intentionally or inadvertently use generative AI to process, store, or transmit data that could be classified under these regulations. The mishandling of such data through generative AI platforms could lead to serious legal and security implications.
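The prohibitions above can be partially operationalized with a pre-flight scan of any prompt before it leaves the researcher's machine. The sketch below is a hypothetical illustration, not part of MSU's guidance: the patterns, labels, and function name are assumptions chosen for this example, and regex detection is necessarily incomplete, so a clean result never substitutes for MSU IT GRC review.

```python
import re

# Illustrative (NOT exhaustive) patterns for data that must not be entered
# into third-party generative AI tools. A match should block submission
# pending approval; a clean result does NOT guarantee the text is safe.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return a label for each sensitive pattern found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize the record for student 123-45-6789, jdoe@msu.edu."
findings = flag_sensitive(prompt)
if findings:
    print("Blocked before submission:", ", ".join(findings))
```

A check like this catches only well-formed identifiers; names, images, unpublished findings, and export-controlled content cannot be detected mechanically and still require human judgment.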
General Risks and Limitations
Be aware of potential misinformation, inaccuracies, biases, unintentional harm, inappropriate content, and algorithmic implications in the output generated by generative AI.
Recommended Practices for Using Third-Party Generative AI Systems:
- Carefully consider the impact of using generative AI before entering data.
- Critically evaluate and corroborate information obtained from generative AI.
- Understand the disclosure requirements for using generative AI in academic work.
- Stay informed about conversations around AI technology and adhere to updated guidance from the university.
Preparing Publications
Ensuring compliance with journal publication standards is crucial when using generative AI in publication preparation, focusing on these key aspects:
- Journal Policies on AI Use: Confirm if the journal accepts AI-generated or AI-assisted content and understand their policy on disclosing AI's role.
- Authorship and Originality: Adhere to the journal's guidelines on authorship and originality, especially regarding content created with AI assistance.
- Data Integrity Verification: Rigorously verify the accuracy and integrity of data processed by AI to align with the journal's standards.
- Disclosure of AI Involvement: Transparently disclose the extent of AI involvement in the research and manuscript preparation.
- Review for Biases or Inaccuracies: Carefully review any AI-generated content for potential biases or inaccuracies before submission.
Peer Review
Generative AI should be avoided in the peer review process to uphold confidentiality and integrity, for the following reasons:
- Confidentiality and Integrity: Generative AI can compromise the secure and unbiased nature of the peer review process.
- Intellectual Property Exposure: There is a potential for unintentional disclosure or misuse of authors' confidential or proprietary information.
- Compliance with Regulations: Adherence to specific policies set by funding agencies and institutions is critical.
- NIH Notice NOT-OD-23-149 strictly prohibits the use of generative AI in the peer review of grant applications and proposals due to confidentiality concerns.
- NSF Notice to the Research Community prohibits uploading any content from proposals, review information, and related records to non-approved generative AI tools. Proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.
Grant Preparation
Grant preparation using generative AI should be approached with a clear understanding of the risks and responsibilities:
- Agency Policies: Stay informed about and adhere to the specific policies of funding agencies regarding AI use in grant writing.
- Investigator Responsibility: The investigator must take full ownership of the entire proposal, even when assisted by generative AI.
- Originality Assurance: Ensure the proposal genuinely represents the research team’s own ideas, maintaining the integrity of the content.
- Non-Compliance: Be aware that any issues like plagiarism or falsified information in AI-assisted content can lead to serious actions from the funding agency.
- Risk Acknowledgment: Use generative AI only with a full understanding of the potential risks involved.
Equitable, Inclusive, and Ethical Use
Generative AI can amplify bias, produce misleading or false results, and work against our values of inclusion and equity. This can lead to erosion of trust, damaging the pursuit of knowledge. Therefore, use of generative AI calls for special care to avoid some known problems:
- Lack of Transparency: Use caution when using generative AI services where there is a lack of transparency about the material used to train the model or where processes to protect against bias and hallucinations are unclear.
- Bias: Be aware that training material may not be representative or inclusive of all points of view, which can skew results in ways that reinforce what is represented and omit what is not.
- Ethical sourcing of training material: Models may rely on material that has been harvested without the knowledge or consent of creators or copyright holders.