Wellington, Aug 26 (The Conversation) – The integration of artificial intelligence (AI) in educational settings often raises concerns about plagiarism and shortcuts. However, a postgraduate business course on digital innovation and strategy, conducted in early 2025, took a different approach: students were encouraged to use AI deliberately throughout the digital innovation process, with an emphasis on reflecting on and analysing its impact.
Feedback from students at the end of the course revealed a notable transformation: they came to see AI not as a mere task-performing robot, but as a partner in innovation that requires careful management. This shift aligns with recent research suggesting that AI, despite lacking consciousness, can collaborate effectively with teams and make valuable contributions.
At the start of the course, many students viewed AI narrowly, either as a threat or as a tool capable only of basic automation. By the end, they appreciated AI's potential to enhance human capabilities and unlock new value, offering data-driven insights that fuel idea development. As one student put it, the perspective shifted from wondering whether AI would replace jobs to exploring how humans and AI could collaborate.
This transformation was reinforced by an assignment requiring students to track their use of AI, critically evaluate its outputs, and connect those experiences with strategic decision-making.
Two significant mindset shifts were observed:
1. From Tool to Partner: The students tackled a case study on recruitment. Initially, they used AI for simple tasks such as scanning large numbers of CVs for keywords. Over time, they began treating AI as a collaborative partner, posing deeper questions such as which skills predict long-term success and how to surface hidden talent from unconventional backgrounds. They realised that the strategic opportunity lay not in simply employing an AI tool, but in designing a recruitment service with AI at the core of the business model, matching company culture with candidate potential. The perspective evolved from viewing AI as an add-on to recognising it as a crucial design factor.
2. From Blind Trust to Responsible Use: The course initially sparked enthusiasm for AI, which gradually matured into critical habits. Students became adept at verifying sources, identifying "AI hallucinations," and weighing trade-offs around privacy, bias, and accountability. One student noted an initial tendency to trust AI results uncritically, highlighting the importance of credibility checks. Students consistently raised concerns about transparency, fairness, and the absence of clear organisational guidelines in workplaces. Many concluded that how AI is deployed matters as much as what it can do. They emphasised framing ethics as central to design: the intent behind use cases, the impact of data, the stakeholders affected, and the transparency of decision-making.
Students encountered risks firsthand when some AI tools produced seemingly confident yet inaccurate outputs, prompting healthy skepticism and a habit of testing AI against domain knowledge and external evidence. Reflections indicated a shift from passive usage to active evaluation and responsibility.
Several students said they intended to keep building their AI skills while retaining a critical approach, and anticipated applying these insights to family businesses and small enterprises, where AI tools can significantly improve service and decision-making. As one student put it, "I now perceive myself as a professional who must apply AI thoughtfully."
This responsible and informed mindset remains the course's most crucial outcome. AI transcends mere efficiency, raising ethical considerations and necessitating thoughtful governance.
Why This Matters Beyond the Classroom
Today's workplaces contend with dual realities where AI can expedite routine tasks while altering how and where value is generated. The student-centered approach applies to organizations as well. Recommendations include:
- Anchoring AI in intent, starting with desired outcomes before selecting tools and data.
- Treating ethics as an integral part of design rather than mere compliance, ensuring checks for bias, privacy, and data integrity are embedded within workflows, and maintaining transparency in AI decision-making.
- Investing in fluency beyond specific tools: adaptive thinkers trained across systems know when to trust, verify, or adapt AI outputs, which builds deeper digital literacy.
- Measuring value at the business model level, as gains often arise from new revenue streams or reduced risk rather than just time savings. (The Conversation)