Universities are grappling with how to integrate AI into many aspects of campus life, including teaching, administrative tasks, student work, intellectual property rights, plagiarism detection, and even admissions. Jeffery Graves, the chief compliance officer at University Risk and Compliance Services, emphasizes the importance of understanding AI’s multifaceted challenges and opportunities. He and Julie Schell, the assistant dean for instructional continuity and innovation in the College of Fine Arts, have held presentations for faculty members and administrators to strategize and deepen the campus community’s understanding of the potential benefits and risks.
“One of the primary functions of a compliance program in an organization is conducting risk assessment,” Graves says. “So, much of the analysis aligns with how we typically approach matters in the realm of privacy and information security.”
Narrow AI, also called focused or directive AI, is known for its task-oriented nature; examples include self-driving cars, AI-driven chatbots and Alex Huth’s decoder program. These applications excel at well-defined tasks, such as data analysis, but lack broader, creative thinking capacity. Huth and his team now employ AI to direct their MRI machine and process the data it acquires. In Huth’s case, AI ethics and compliance go hand in hand with existing requirements around privacy, permission and publication of findings, such as following HIPAA and FERPA.
Graves emphasizes that narrow AI research often mirrors non-AI research. Regardless of the field, all research, including AI research, must adhere to ethical and legal standards.
In contrast, generative AI can produce new content, such as text, images, audio and video, based on patterns learned from extensive datasets. A notable example of generative AI is ChatGPT. These AI models have the potential to create content across various domains and have generated significant interest in academia. Graves points out several unique challenges associated with generative AI, which often produces outputs that can be mistaken for human-created content.
Institutions must consider issues such as informed consent when using AI to generate content. If an AI-generated document or research contains personal data, questions arise about how and when permission was obtained. Determining ownership and copyright of AI-generated content is also a complex issue. For example, if a researcher were to produce a first draft of their findings in ChatGPT or Bard using confidential information, that information could become part of the AI’s training data and might later surface for another researcher at a different institution or in another country. There is also the possibility that AI service providers might claim ownership over the output, raising further questions about intellectual property rights.
AI models can inadvertently absorb confidential data and use it to train their algorithms, presenting significant privacy and security concerns, especially when handling student records, health information or other sensitive data. To navigate these AI-related challenges effectively, universities are taking proactive measures, including education and awareness initiatives, acceptable use policies, compliance and legal oversight, and integration into the curriculum. Future endeavors may involve enterprise-wide AI licenses to ensure data privacy, standardized AI usage policies and more comprehensive AI-related curricula.
AI’s expansion within higher education holds both promise and challenges. While AI offers transformative potential, institutions must carefully weigh the ethical, legal and compliance aspects to harness its benefits.
“We cannot afford to ignore the presence of AI,” says Graves. “It’s here to stay. It’s wiser to deal with it intelligently rather than to ignore its impact.”