Name
Professor Wim A. Van der Stede (CIMA Professor of Accounting and Financial Management, Department of Accounting)
Email
vanderst@lse.ac.uk
Teaching context
AC457 Data Analytics for Management Control is a postgraduate course delivered across 10 three-hour weekly sessions to a cohort of 50 students. It examines how organisations maintain control in decentralised settings — spanning action controls (that improve coordination), result controls (that improve incentives) and cultural controls (that improve matching) — with data analytics integrated throughout. Weekly sessions with a strong conceptual core (e.g., about where to [co]locate decisions [with knowledge]; whether to monitor actions [to improve coordination] or results [to improve incentives]; how to anticipate and mitigate key “agency costs”), supplemented with case discussions that apply these concepts, form the backbone of the learning experience.
What was the aim of the project?
This vignette, which the Eden Centre invited me to write, describes three interconnected strands that illustrate how I encourage students in my course to engage productively with AI as critical users:
1) Use, and push, AI to find examples of company practices in the context of my course’s topic and to critically evaluate them.
2) Engage the students in discussions about how AI will change my course’s subject matter.
3) Encourage students to think about how AI may play out in their own jobs.
1. “Pushing the AI” and “pushing the students” to evaluate the AI output
I use AI to surface examples of company practices relevant to management control systems, and to critically evaluate both the examples and the AI's reasoning. I want my students to be specific, which requires "clever" prompting (lest they be too easily satisfied with the outputs the AI produces).
There are many examples. One concerns the starting point of management control, which is strategy implementation. I used a recent article from The Wall Street Journal, aptly titled "Does This Investment Fit the Company's Mission? Just Ask AI," which goes on to argue that "too many leaders pursue projects and investments that veer from their organization's overall strategy. Artificial intelligence could flag such disconnects." But how good is the AI at doing this?
Another example is to find notable instances where organisational failures occurred due to control weaknesses. "Due to" is of course a strong claim in any event, but poor control systems do vary from being an innocent bystander of an organisational failure to being more core to it. And then there is distinguishing how different types of controls, and what exactly about them, may have contributed to the failure. So, we prompted the AI to "think of [a] [recent] case[s] in the news that failed due to major internal or action control deficiencies" [italics added; it is important, as you will see in a second] and to explain "what the organisation could have done to prevent or detect the control failure[s]." This requires careful prompting because the question asks specifically for a failure reasonably attributable to action controls, not just any organisational failure (such as, say, cases of egregious earnings manipulation that may have arisen from result control issues with incentives). AI tends to be imprecise with this distinction, which is only partially addressable through prompt design. Even with precise and iterative prompting, the cases the AI returned (like the Wells Fargo fake accounts scandal) are debatable in terms of the role that action vs. result controls played in the failure. Hence, the discussion doesn't stop with "reading" the AI's output. Instead, that output is merely a starting point for critical discussion and an input to sharpen students' understanding of fundamental concepts like, in this illustration, action and result control.
Many of these failures the students and I could easily recognise (Boeing, Wells Fargo, Volkswagen, Louvre), but when the Mid-Staffordshire NHS Trust patient care scandal came up, I asked the students how they knew that this case, or everything the AI stated about it, wasn't a hallucination. I couldn't be sure, and I admitted it. (As a teacher you don't have to know everything, but you should encourage students to challenge everything.)
Moreover, and as a segue into the third section below about using AI at work, this was a good way to highlight recent "embarrassments" faced by reputed firms, such as one of the global Big-4 ("accounting") firms that had to refund the Australian government after admitting it used AI on a report that was riddled with mistakes. Indeed, the UK Financial Reporting Council, an accountancy regulator, warned that the Big-4 firms were failing to monitor how AI affected the quality of their audits. Not a bad message for accounting students to hear.
2. Reflecting on how AI will change the course’s subject matter
Beyond engaging students to use AI to find application cases to augment their understanding, I engage them in discussions about how AI will also likely change the course’s very subject matter.
We discussed this by way of an article published in the middle of term: When AI Becomes an Agent of the Firm. The following excerpt from the abstract captures the idea:
The integration of artificial intelligence (AI) into firm decision making parallels the emergence of the professional manager, which prompted the birth of agency theory. We examine the evolution of AI through an agency theory lens, considering how the nature of firm control and decision rights change as AI evolves.
This prompted us to discuss and contrast the "human agent" (i.e., employee) with an "AI agent," and to ask whether and how the consequences of an AI's "reinforcement learning" may produce symptoms (e.g., performance misreporting, biased resource forecasts) similar to those produced by human "self-interest," even though these should not be mistaken as stemming from the same "agency problem." We also discussed whether an AI agent could be "incentivised" (such as by making access to computing power conditional on performance). And so on.
More generally, these are hard questions at the heart of how companies might structure (or constrain) delegation, adjust (or expand) monitoring, and assign accountability when the “worker” is an ever more capable (and possibly an autonomous) agentic AI. Many organisations are still treating this as a tooling decision, when in practice it is already becoming a management control question.
3. “Adapting deeply” and “Leaning into what AI cannot do”
Finally, I encourage students to think about how AI may play out in their own jobs, or how it may begin to affect the skills they will need to get and keep those jobs. My nudges to them are twofold:
- Adapt deeply. Do not just use AI as a glorified browser. Learn to use it seriously. Push it. (There are plenty of opinion pieces being written about what this means, and I included a section on my Moodle page called "Miscellaneous stuff" where I shared some of these.)
- Lean into the inimitable, irreplaceable. Double down on relationships, physical presence (note that class participation is 20% of my course grade), and — very relevant for the subject of this course — accountability. Per the second strand above, even with advanced agentic AI, some "monitoring" will still involve human "auditing" (music to the ears of accountants) and human "approvals" (of AI-proposed actions). Someone will still have to understand if and how the system works (as intended) and be willing to put their name on the line when things go wrong.
Together, I hope these three strands reflect an evolving approach to teaching in which engaging critically with AI is not an add-on but is woven into the intellectual fabric of the course itself in different but mutually reinforcing ways. There is no hiding.