A Quick Intro to Generative AI
Generative AI is a type of artificial intelligence that can create new content—like text, images, code, and more—based on patterns it’s learned from existing data. Think of it as a supercharged assistant that can help brainstorm, summarize, translate, and develop content.
Large Language Models, Explained by a Large Language Model
Large Language Models (LLMs) are the brains behind tools like ChatGPT. They’re trained on massive amounts of text and can generate human-like responses. They don’t “think” or “know” things—they predict what comes next in a sentence based on patterns. So when you ask a question, they’re not pulling facts from a database—they’re guessing the most likely answer based on what they’ve seen before.
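The next-word-prediction idea above can be sketched with a toy "bigram" model. This is a deliberate simplification, not how a real LLM is built (real models use neural networks trained on billions of examples), but the core mechanic is the same: given what came before, guess the most likely next word.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (hypothetical example text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "cat" follows "the" more often than "mat" or "fish" in this corpus,
# so the model "predicts" it -- not because it knows anything about cats.
print(predict_next("the"))
```

Notice that the model never "knows" facts; it only reflects patterns in its training text. That is why LLM answers can sound fluent yet be wrong.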
Principles for Responsible AI Use
DePaul’s framework for generative AI emphasizes thoughtful integration and institutional accountability. Key principles include:
- Transparency: Faculty and students should understand what AI tools can and cannot do; clear communication about limitations and ethical concerns is essential. Students should always check with their instructors before using any AI tool, and both students and faculty should disclose and cite AI use in the classroom.
- Fairness and Inclusivity: AI systems must be evaluated for bias and designed to support diverse learning needs. DePaul fosters an environment of fairness and belonging, and these themes must be central to the use of new AI technology.
- Data, Privacy, and Security: Protecting student data and complying with privacy regulations is non-negotiable. Sensitive data should never be shared with public AI tools. Treat them like public forums: if you wouldn’t post it online, don’t feed it to a bot. Any use of university data with AI tools must first pass a security review and must comply with the Access to and Responsible Use of Data Policy.
- Reliability and Safety: Tools should be vetted for accuracy and predictability before being introduced into the classroom. AI tools can produce false, misleading, fabricated, or inappropriate content, so use them with caution and with the understanding that they should never be treated as a final source of truth or a finished product. Faculty are encouraged to exercise discretion and careful oversight when working with these powerful tools.
These principles serve as a foundation for responsible experimentation and innovation.
Potential Benefits of AI
Used wisely, generative AI can be a game-changer:
- Speed up writing, editing, and brainstorming.
- Personalize learning and support student engagement.
- Automate repetitive tasks so you can focus on the creative stuff.
- Help students explore ideas, refine drafts, and think critically.
Risk: Generative AI — Intellectual Property, Data Privacy, and Cybersecurity
Generative AI tools can be powerful, but they come with real risks:
- IP Concerns: AI might generate content that’s too close to copyrighted material.
- Privacy: Anything you enter could be stored or used to train future models.
- Security: Sensitive data should never be shared with public AI tools. Treat them like public forums: if you wouldn’t post it online, don’t feed it to a bot.
Risk: Large Language Models — Unreliable Outputs
LLMs are great at sounding confident—even when they’re wrong. They can “hallucinate” facts, make up sources, or misinterpret your question. Always double-check what they give you, especially if you’re using it for research, teaching, or decision-making.
It’s best to use Copilot Chat for most business purposes at DePaul (if approved for your use).
If you use Microsoft’s Copilot Chat by logging in with your BlueKey credentials, many of these common AI-related risks are addressed. Even when using Copilot Chat, make sure it is approved for your specific use.
Copilot Chat has numerous safeguards already built into the app. Microsoft ensures enterprise-grade security, privacy compliance, and no training on your data. It respects your policies and protects against most AI risks.
Given these protections, we urge the DePaul community to use Copilot Chat rather than other, potentially less secure AI services such as ChatGPT, Google Gemini, and the many others that could put DePaul at risk, especially of data leakage.
Questions about using generative AI?
Before using any AI tools, all DePaul students, faculty, and staff are encouraged to consult with their schools, departments, instructors, or managers. Be transparent about your interest in using artificial intelligence and make sure it is approved by the appropriate authority or university entity.
- DePaul students are encouraged to review course materials for each class and follow the guidance of their instructors. Never use AI tools without direct guidance and approval from instructors and school administrators.
- DePaul faculty members are encouraged to learn more about AI in the classroom through this Teaching Commons website. Faculty can always reach out to the Center for Teaching & Learning with questions about AI tools in the classroom.
- DePaul staff members are encouraged to visit this page for information about DePaul AI tools.
Questions about using a new AI tool? Please submit a ticket with the Help Desk before using or purchasing any new AI tools or software at DePaul. Our team can help you navigate through the potential risks and benefits of these new and powerful tools.
Looking Ahead