
Ciaran Williams
Science and Environment Editor
We’re surrounded by artificial intelligence, so what are its implications for education? This month, The Plant interviewed Robert Stephens, a professor involved with artificial intelligence, in the hope that he could answer some of our questions about the use of AI at Dawson.
Q1: Are professors currently exploring ways to integrate artificial intelligence into their lectures and grading?
Grading, no. But I think a lot of faculty are realizing that it can be helpful for things like drafting instructions or generating examples for assignments or discussions. For example, if you wanted to create a bunch of practical scenarios to test students’ understanding of some concept, or to analyze them ethically or something, an A.I.-generated list can be super helpful – assuming you carefully validate and edit the suggestions.
And it is definitely useful for generating test questions in multiple-choice format, that sort of thing. Again, it takes a certain level of expertise to weed through the outputs and be judicious about what to actually use – but it’s a hugely efficient way to gather ingredients quickly and then start organizing.
Q2: Is there some kind of faculty or administrative body that is responsible for looking into, or regulating, the use of AI at the college? If there isn’t, should there be one?
There are discussions at many levels. It is difficult because different programs/disciplines have different needs: the ways a student might use or misuse ChatGPT in an English or French course may be quite different from using it in a Science Lab report. Some classes are going to incorporate A.I. into the way students are trained/taught, because professions in the future will expect some competency and familiarity with these tools. But other classes are focused on building basic skills in communication, research, critical thinking, etc. – so in that case we wouldn’t want students using A.I. tools (yet), as it would mask whether or not they have actually developed the skills on their own.
The College as a whole has basically left the existing Academic Misconduct policy in place, and it is up to individual teachers to explain what would or would not constitute misconduct in the case of A.I. usage in their course and in their particular assignments. Some departments have adopted department-wide policies, but the College has not adopted a single overarching policy on A.I., which I think is probably the right stance at this point.
What a LOT of faculty have done is rearrange their classes to have more high-stakes work done entirely in class, to moot the A.I. question. Very few are still assigning take-home writing assignments worth substantial marks. This has caused more stress for students, obviously, since many [find] writing essays in class, under time constraints, to be less than ideal. And many teachers have found it sad to suck up so much class time with writing assignments (having to cut content and cut back on more interactive activities). It has also had a secondary effect of overwhelming the Student AccessAbility Center in the past couple of years, since the number of “tests” has essentially skyrocketed, especially in courses that didn’t traditionally have in-class “tests.”
Some other things will likely be regulated as we go forward; for example, there will almost certainly be restrictions on the use of certain tools on College computers (like DeepSeek?), based on privacy and data concerns.
Q3: Do you think the adoption of artificial intelligence into education curricula is inevitable? If so, what are some of the drawbacks and benefits you can see?
Yes, it is inevitable. And I think the benefits are definitely there, though it will take a while for everyone to find the sweet spot and work out the bugs. The main benefit is the speed of processing and sifting information, and the ability to suggest ideas and paths forward when brainstorming, etc. In an ideal world, A.I. could be a personalized tutor for students, available 24/7. I’d like to be able to rely on A.I. tools as a force multiplier for me, as the teacher — if a chatbot could help and guide students as I would have, then I should welcome it into my course! And the technology is there to shape an A.I. assistant for a course that would constrain itself to “help” only as much as I would have personally helped if a student came to me in person. Obviously I would “help” a student in real life, but I wouldn’t do all the work for them, you know? I would maybe fill in a few blanks to get them on track and then push them to work out the rest on their own… if the A.I. tool does that, then great. The trick is: how do you constrain it to “help” but still ensure the student actually learns how to do the thing on their own?
And that is the drawback: currently the tools are super helpful, but also programmed to never stop helping, and if you keep asking for more, eventually they will answer and/or solve all the problems you throw at them. So a student could potentially have the A.I. doing all the work, and learn nothing from it. That’s no good. If a student is struggling with an assignment, I don’t say, “come to my office, I’ll do it for you.” But the A.I. will. And then we get into a situation where no learning is happening.
These tools are excellent and useful in the hands of people who can use them well: i.e., people who already have the knowledge, competency, and expertise to validate the outputs, and who essentially could have done the work on their own.
An example would be in Creative Arts – let’s say Graphic Design. We now have access to incredible A.I. tools. I can use these tools to create posters, or logos, or whatever… But I have no real skill or knowledge of the art of Graphic Design, so I will just take what I get, and not really appreciate whether or not it is any good, and in what ways it could have been so much better or more effective. The Graphic Designer does have this expertise – so in [their] hands, the tool is useful: [they] can work quicker than before, perhaps, and then competently evaluate the A.I. outputs and use them effectively and wisely. But this assumes a Graphic Designer who learned [their] craft before these tools… How, going forward, do we teach young aspiring Graphic Designers to use the tools, but not let the tools substitute for their own nascent skills? And if all the students turn in A.I.-assisted work, how will we be able to identify the really talented ones? This will be the tricky part. And it’s not just the Creative Arts where this problem is already happening: Computer Science has the same problem (coders obviously will need to learn A.I.-assist techniques, but how do we ensure they don’t rely on them too heavily without learning the underlying principles and skills?). Same thing in Science and Social Science in terms of research: using A.I. tools for research will be standard going forward, but we’d best not lose the ability to do individual research along the way!
Q4: Within college policies, what are some ways students can use artificial intelligence to help with their assignments?
I sort of addressed this above – there are many ways to use these tools to help get started on a project, or to help sift through a large volume of detail in order to find a through-line for a particular project or research question. There are also tons of ways to use these tools creatively, almost as a foil or partner, to bounce ideas off of and help clarify a direction that you haven’t been able to nail down. There are definitely faculty who are using the tools in this sort of way already.
I’d like to plug a contest Dawson is currently running, through S.P.A.C.E. and DawsonA.I.!
The goal here is to partner with A.I. tools to develop and make something – and to reflect on the usefulness of the tools along the way. We think this is a model for how A.I. could be effectively incorporated into student work, and it’s a fun opportunity for students to really plumb the depths of what these tools can (and cannot) do.
Q5: What kind of advancements do you think we can expect to see in the field of artificial intelligence in the coming years?
I think A.I. agents are the next step, and that’s where, economically speaking, we are going to see more of a revolution. There is a LOT of human labour that is easily replaceable in the coming years, and not just graphic designers, coders, and writers. I mean pretty much ANY human whose job involves data and reporting, which is basically every white-collar job. And mechanical agents will replace a ton of blue-collar work (drivers, for example; picking and shipping departments; factories, etc.). The only safe careers in the next generation will be things that involve personal engagement inside people’s homes (e.g., plumbers, or itinerant caregivers for the elderly). There will still be “humans in the loop” whose jobs will be to validate and sign off on A.I. outputs – but there will be far fewer of them. Basically, become a plumber or a CEO. That’s my advice.
Q6: What resources are available to students who want to learn more about AI, whether through the college or through external organizations?
DawsonA.I. and Dawson S.P.A.C.E. are constantly offering activities to students and workshops to faculty, like the “prompt and validate” challenge mentioned earlier. We have also run some “Data Storytelling” challenges in past years, and more silly/fun events like a ChatGPT songwriting contest and a GPT “jailbreak” contest – all oriented toward getting students to try out these evolving tools and learn their uses and misuses.
Dawson A.I. also has an ongoing relationship with the A.I. LaunchLab, a local non-profit focused on A.I. education. We have three cohorts of students a year who get a 10-week training program for free (!), plus the chance to keep going with an independent project and even paid internship possibilities. This is open not only to tech-savvy students, but to absolute beginners in non-technical fields.

