CSC-225: Prompt Engineering for Large Language Models
Anyone can type a question into ChatGPT. But getting an AI to produce exactly what you need is a learned skill. This course teaches you that skill.
What You'll Learn
Prompt engineering is the practice of designing inputs that reliably produce high-quality outputs from large language models. In this course, you'll move beyond trial-and-error and develop a systematic toolkit for working with LLMs:
- How LLMs actually work: tokenization, attention, and context windows, so you understand why certain prompts succeed and others fail
- Core prompting techniques: zero-shot, few-shot, chain-of-thought reasoning, and role-based prompting (a short sketch contrasting the first two follows this list)
- Evaluation and iteration: how to measure prompt quality and improve it methodically
- Multi-stage pipelines: chaining prompts together for complex tasks like research, analysis, and code generation
- Retrieval-augmented generation (RAG): grounding model outputs in real data
- Responsible use: bias detection, attribution of AI-assisted work, and sustainable deployment practices
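To make the zero-shot/few-shot distinction concrete, here is a minimal sketch in Python. The sentiment-labeling task, the sample reviews, and the function names are illustrative choices, not course materials; the sketch only builds prompt strings, so it runs without any model API.

```python
# Illustrative only: contrasts a zero-shot prompt (no examples) with a
# few-shot prompt (labeled examples prepended) for a made-up sentiment task.

FEW_SHOT_EXAMPLES = [
    ("The battery died after an hour.", "negative"),
    ("Setup took thirty seconds and it just worked.", "positive"),
]

def zero_shot_prompt(text: str) -> str:
    """Ask the model directly, with no demonstrations."""
    return f"Classify the sentiment of this review as positive or negative:\n{text}"

def few_shot_prompt(text: str) -> str:
    """Prepend labeled examples so the model can infer the task and output format."""
    demos = "\n\n".join(f"Review: {r}\nSentiment: {s}" for r, s in FEW_SHOT_EXAMPLES)
    return f"{demos}\n\nReview: {text}\nSentiment:"

if __name__ == "__main__":
    review = "The screen is gorgeous but the hinge feels flimsy."
    print(zero_shot_prompt(review))
    print("-" * 40)
    print(few_shot_prompt(review))
```

The few-shot version tends to yield more consistent output because the demonstrations pin down both the label vocabulary and the response format, exactly the kind of difference the evaluation-and-iteration techniques above are designed to measure.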
Why This Matters
LLMs are rapidly becoming a core tool across every discipline, not just computer science. Whether you're writing research papers, building software, analyzing data, or creating content, the ability to communicate effectively with AI systems is becoming as fundamental as knowing how to use a search engine.
The difference between a vague prompt and a well-engineered one is often the difference between useless output and genuinely useful work. This course gives you the frameworks to consistently land on the latter.
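As a hypothetical illustration of that gap, compare a vague request with a structured one. Both prompts below are invented examples, not material from the course:

```python
# Hypothetical before/after prompt pair; neither comes from course materials.
vague = "Write something about climate data."

engineered = """You are a data journalist writing for a general audience.
Summarize this dataset of monthly temperature anomalies in about 150 words.
Requirements:
- Lead with the single most important trend.
- Cite specific numbers from the data; do not invent any.
- End with one open question the data raises."""

print(engineered)
```

The engineered version fixes the audience, the scope, the format, and the failure modes to avoid; each constraint removes one more way for the model to guess wrong.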
Who This Course Is For
This course is open to all majors. You don't need prior programming experience. Come with curiosity about how AI language models work and a willingness to experiment. By the end of the semester, you'll have a portfolio of prompt engineering projects and the confidence to apply these techniques in your own field.