the Age of ……

it was the Industrial Age, followed by the Information Age, or Digital Age, or Knowledge Age (Bereiter, 2002). and 2.5 decades into the 21st century, the next Age has perhaps arrived: the Age of Pretending.

just saw Richard Stallman give this lecture at Georgia Tech some two weeks ago, and he coined the term “PI”: no, not private investigator, but Pretend Intelligence:

“…nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all because they don’t understand anything and they don’t know anything… promoted the most for large language models, generators as I call them, because they don’t know anything. They generate text and they don’t understand really what that text means…

Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s let’s refuse to do that. So I’ve come up with the term Pretend Intelligence. We could call it PI. ” (12:25-12:49)

calling it PI is great, cos it’s very catchy, and cos some singaporeans pride themselves on liking abbreviations a lot (and many often assume an initialism is the same as an acronym).

was discussing with my new younger friend vera 清雅 (清秀、雅丽, refined and graceful: a beautiful name), who happened to share with me an article on the phenomenon of influencers. and influencers essentially bank on “perceived expertise, trustworthiness, and attractiveness” (Duckwitz & Strasser, 2025, p.2). the keyword here is: perceived. and what’s the (best) act that can influence people? yes, no prize for guessing it: wayang (where one defines wayang as involving some act of pretending, intelligence or otherwise).

it’ll be interesting to observe where all this is going some years down the road, and how AI or PI is going to advance humanity in the domain of human intelligence, pretending, or wayang.

a day in the life of the researcher, storyteller, and learning designer

today feels like a ‘high’ point for the past 3.5 years. and as the saying goes, what goes up must come down? but whatever. impermanence is real, so just record this moment à la titus’s advert: “不在乎天长地久,只在乎曾经拥有” (roughly: never mind whether it lasts forever; what matters is to have once had it)

today is the first day of our newly launched 3-day workshop. it was fully subscribed and we saw 24 participants today. the course is unique as it is the first course in the academy’s offering designed on the social constructivist philosophy of learning, and a case study was developed to anchor the course and enable the (social) knowledge creation discourse. the main facilitator is my 院长 (dean, aka MD, aka big big boss), who’s a highly experienced comms practitioner and leader with over 30 years of hands-on (实战) experience in govt/public comms. with her is my buddy viv as co-facilitator.

by design, the learning activities in the classroom today were filled with conversation after conversation, facilitator-peer and peer-peer; there was but a small transmissionist segment at the beginning. towards the end of the day, my buddy prompted participants for quick feedback on the case study (couldn’t wait till day 3). one participant felt that the MND experience appeared ‘idealistic’ to him. to me, whether something feels ideal is often very much contextualised within an individual’s current or past experiences. the case study had not depicted the pains explicitly (e.g., time crunch, limited budget, human resources, paperwork … you name it, we’ve all experienced it) as i had chosen to focus on the ideas that fostered the close collaboration between policy and comms & engagement colleagues, and on how they had worked (or toiled?) together towards a common goal. in short, the lived experiences revealed the possibility of things working out in reality; not a tale of imagination in the guise of a case scenario. for the other three participants who shared, i am very glad to hear that the MND experience had facilitated their (personal) learning takeaways. which meant my 初衷 (original intent) had worked.

and after class ended, my 院长 texted:

“Thanks for your excellent case study that made this workshop possible.”

receiving this msg meant a lot to me, as it’s the first case study (among the nine i’ve written so far) to receive affirmation from both facilitator and participants/readers. moreover, it’s an affirmation of the first attempt at a dialogic learning design for a new workshop. strictly speaking, the seasoned educators among us will know that the skilful facilitator is the critical factor that makes or breaks it. therefore, i must thank my 院长/MD’s 👏👏👏 facilitation for enabling the tool to perform as designed.

if the workshop were to go for a second run (cos who knows what will happen; the impermanence of life is but truth), i look forward to creating a second case study to enable future knowledge creation discourse.

shall see … …

small year-end observation of GenAI/LLM/transformer

got my own evidence yesterday of how far GenAI, based on the Transformer model, is going (or going nowhere) while finalising my last piece of homework for 2025. GenAI based on the Transformer works fundamentally by predicting what word(s) come next. and what makes this ‘prediction’ possible? the dataset the models were trained on. in short, the Transformer, while ‘creative’, creates based on existing patterns derived from its dataset. and who created this dataset? human thinking, thoughts, and ideas, formed into words in the pre-GenAI era. and that dataset has long run out by now. you may read this article by de Gregorio to see all the ideas i have mentioned fall together.

long story short, whatever LLM provides you, it’s something that existed out there in its mega training dataset.
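the ‘predicting what comes next’ idea above can be sketched with a toy bigram model. to be clear, this is a made-up illustration (the training text is invented, and real transformers learn far richer statistics than word pairs), but it shows the core point: a purely pattern-based predictor can only ever reproduce continuations it has already seen in its training data.

```python
from collections import Counter, defaultdict

# toy "predictor" (hypothetical, nothing like a real transformer's internals):
# learn which word most often follows each word in the training text
training_text = (
    "the model predicts the next word and "
    "the model learns patterns from the data"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # return the most frequent continuation seen in training, or None
    # if the word never appeared: no pattern in the data, no prediction
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))     # "model" (seen twice after "the")
print(predict_next("wayang"))  # None: never seen, nothing to predict
```

the second call is the point of the sketch: ask it about anything outside its dataset and it has literally nothing to say.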

so, now back to my observation. this is the statement i wrote/created:
“With the advent of generative artificial intelligence (GenAI), cyber actors have harnessed it for autonomising complex hacking activities”

after feeding the statement into PAIR (powered by claude), the platform suggested:

“autonomising” → “automate” (clearer expression)

what’s clear and what’s not is subjective. but “clearer” here is a conclusion of the algorithms based on the dataset. and why is “autonomising” less ‘clear’? by design, ‘clearness’ has to be interpreted based on the training dataset. which begs another question: between autonomising and automating/automate, which term is likely to appear more often in the dataset, and thus lends itself to the prediction of ‘clearness’? from my author’s point of view, PAIR’s suggestion is definitely not ‘clearer’ in representing what i intended for my readers. and ‘autonomising’ is likely a relatively rare concept out there at the moment. to me, in this case, the LLM’s greatest limitation, being bounded by its dataset, is somewhat revealed. asking a far-stretched question: is the current conception of LLM/transformer going to lead to AGI? i think the answer is clear.
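the frequency argument can be made concrete with a toy sketch. both the mini corpus and the “clearness” criterion here are invented for illustration (this is emphatically not how PAIR or claude actually scores suggestions): if ‘clearer’ simply tracked how often a word appears in the data, the rarer term would lose regardless of what the author intended.

```python
from collections import Counter

# made-up mini corpus: "automate" is the common word, "autonomise" the rare one
corpus = (
    "we automate the pipeline and automate the reports while "
    "researchers discuss how agents could autonomise complex tasks"
).split()

counts = Counter(corpus)

def suggested_as_clearer(a, b):
    # hypothetical purely statistical criterion: the more frequent
    # word "wins", with no access to the author's intent
    return a if counts[a] >= counts[b] else b

print(suggested_as_clearer("automate", "autonomise"))  # "automate"
```

under such a criterion, an author deliberately reaching for a rare, precise term would always be ‘corrected’ towards the statistically common one.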

of cats and guardians of staircases

how many cats and guards of staircases have you observed in your life? no? not sure? read on …

The Ashram Cat (aka The Guru’s Cat)
An esteemed guru (spiritual teacher) is teaching his disciples, but an ashram cat constantly distracts the students by wandering around. To prevent the distraction, the guru orders his disciples to tie the cat to a post or tree during lessons or evening worship. This practice continues daily. Generations of gurus, disciples, and cats pass away, but the act of tying a cat during the lesson becomes a deeply ingrained, sacred tradition.

The Guardian of Staircase
John F. Barker in Roll Call tells the story that for more than twenty years, for no apparent reason, an attendant stood at the foot of the staircase leading to the House of Commons. At last someone checked and discovered that the job had been held in the attendant’s family for three generations. It seems it originated when the stairs were painted and the current attendant’s grandfather was assigned the task of warning people not to step on the wet paint. (source acknowledgement: www.lecturesbureau.gr)

times have changed: ubiquitous network connections, mobile devices and apps have arrived, and GenAI has descended. but things done yesterday are continued today, tomorrow, and probably the day after tmr. who, especially minions, dares question ‘traditions’ or remove guards from anywhere?

meaning-making, losing it?

kueh attended a meeting and heard his boss huat share about his latest beloved AI tool called LnbkM, and how he used it to summarise the many readings and articles on the internet. boss huat encouraged everyone at the meeting to do as he does.
as kueh listened, he scratched his head, confused. slipping his hand into his left pocket, he took out his notebook and flipped to some notes he had jotted down just last week, at a talk by a professor How PL who shared the following (research-based) ideas about human learning:

  • learning is about meaning-making. and reading is a means through which meaning-making takes place.
  • neuroscience (brain-based) research also suggests that cognitive functions may be lost if they are unused or underused: comprehension, analysis, and synthesis, to name just a few

what boss huat suggested was to let the AI be the one to “read”, with humans only receiving the AI’s summary of the “reading”. if so, who is doing the meaning-making? kueh wondered what the future of humans would be like if they began to lose the brain functions of meaning-making. if so, what does it mean to be human anymore?