Before we start catastrophising about our future AI rulers, we should stop and appreciate the potential good that artificial intelligence can offer. The impact of AI on personality assessment and workplace communication will likely be positive—and extensive.
Recently on The Science of Personality Live, cohosts Ryne Sherman, PhD, chief science officer, and Blake Loepp, PR manager at Hogan Assessments, spoke with Michal Kosinski, PhD, associate professor in organisational behaviour at Stanford University, about the evolving technology of artificial intelligence.
Michal’s primary research focus is studying humans in a digital environment using cutting-edge computational methods, artificial intelligence, and big data. He was also behind the first press article warning about Cambridge Analytica, the privacy risks it exploited, and the effectiveness of the methods it used.
Let’s look at how AI language models have evolved, what AI-assisted communication might become, how AI affects the future of personality assessment, and whether AI language models can be creative.
The Evolution of AI Language Models
Within the next few months (as of March 2023), AI language models will become exponentially more capable and ingenious. How does that explosive growth happen?
The approach used to develop AI language models started with chess. At first, software engineers and data scientists fed AI chess programs archives of games played by humans. Then they equipped two AI programs with a virtual chessboard and instructions for how to play, without any human intervention. “For the first few million games, those models were completely stupid,” Michal said, explaining that the rate of play was millions of games per second. “But soon, after a few hours, what emerged was this alien, superhuman software that could play chess at a level completely unachievable to human players.”
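For readers who want a concrete picture of that self-play setup, here is a minimal, purely illustrative sketch in Python. It assumes the third-party python-chess package, and its `random_policy` is a placeholder standing in for a learnable model; the update step that would let the program improve from the outcomes it collects is deliberately left out.

```python
# A minimal sketch of a self-play loop, assuming the third-party
# python-chess package (pip install python-chess). The "policy" here is a
# random move picker standing in for a learnable model; a real system would
# update the policy based on the game outcomes it gathers.
import random
import chess

def random_policy(board: chess.Board) -> chess.Move:
    """Placeholder policy: pick any legal move at random."""
    return random.choice(list(board.legal_moves))

def self_play_game(policy) -> str:
    """Two copies of the same policy play one full game; return the result."""
    board = chess.Board()
    while not board.is_game_over():
        board.push(policy(board))
    return board.result()  # "1-0", "0-1", or "1/2-1/2"

if __name__ == "__main__":
    results = [self_play_game(random_policy) for _ in range(5)]
    print(results)  # the raw material a learning system would train on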
Software developers and artificial intelligence specialists used the same adaptive strategy to teach AI models how to craft language. Humans learn language through conversation, context, and correction; they make mistakes, learn, and make fewer mistakes over time. “At some point they stop making mistakes and reach new levels of language. The same approach was used to train ChatGPT and similar models,” Michal said. The AI programs were given sentences with one word missing, failed millions of times to fill in the blank correctly, and then began to get it right. After a few million dollars’ worth of electricity and a few billion sentences, Michal quipped, the programs demonstrated an extraordinary level of language mastery.
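As a rough illustration of that fill-in-the-blank objective, the snippet below asks an off-the-shelf masked language model to guess a hidden word, using the Hugging Face transformers library. This shows the task at prediction time only; the actual training process repeats it over billions of sentences, nudging the model toward the correct word each time it guesses wrong.

```python
# A small illustration of the fill-in-the-blank objective described above,
# assuming the Hugging Face transformers library is installed.
from transformers import pipeline

fill_blank = pipeline("fill-mask", model="bert-base-uncased")

# The model scores candidate words for the blanked-out position.
for guess in fill_blank("The chess engine learned to [MASK] better than any human."):
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```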
The AI revolution began with teaching machines to solve problems using the same strategies that we use to teach humans: reinforcement and feedback. At first, the machines make obvious logical mistakes, but then they don’t. “The AI is responding to you as if it’s another person, which is the most incredible thing,” added Ryne. Because computers can exceed humans in logical ability, they are well suited to both playing chess and using language.
AI-Assisted Communication
“AI is a revolution comparable with the invention of written language,” Michal said. Writing gave humans the ability to communicate across time, sometimes across thousands of years. Knowing how to use a stylus, quill, or pencil was essential for communication before computers; now, knowing how to use a keyboard is. Very soon, the same kind of fundamental change will happen with AI language models, Michal predicted.
“I think that GPT is potentially a new language for humanity to communicate at speed and convenience unheard of and impossible before,” Michal said.
An AI language model won’t just help humans write emails. It will craft the perfect message in the language most readily understood by the recipient. Here’s how.
Imagine that Michal wants to send Ryne an email, and that an AI language model knows and remembers all the events of each person’s life and has consumed every piece of digital communication each has produced. If Michal asked the AI to send a message to Ryne, he could make the request in very few words, as if speaking to a good friend with intimate knowledge of him. And because the AI would know Ryne at that same level, it could “translate” Michal’s message into the perfect form for Ryne. The AI could use not only Ryne’s preferred language, such as English or Mandarin, but also a highly personalised form of that language unique to Ryne.
“In terms of the potential for translation, it knows the meaning of what you’re trying to say. It can translate that into a meaning that somebody else can understand in the way they understand,” Ryne said.
AI-assisted communication also extends to searching the internet. You wouldn’t ask the AI language model to find a website for you; you’d ask it the question you want answered. It would search across websites and tailor its answer, in whatever length or depth you need, to your individual understanding of the world.
AI in Personality Assessment
Artificial intelligence is great at knowing and remembering what has been written, both words and data. For an AI language model to predict personality based on language, you’d first need to collect a lot of quality data. Michal pointed out that AI language models already understand language, of course, and can translate words into analysable numbers. “They already understand psychological concepts like personality,” he said. These models have read texts written by introverts and extroverts and could theoretically detect, based on a fragment of text, whether a person is introverted or extroverted.
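As a hedged sketch of that idea (not Hogan’s assessment methodology), the snippet below uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library to ask whether a fragment of text reads as more introverted or extroverted. The model and labels are illustrative choices, and the output is nowhere near a validated personality measure.

```python
# Illustrative only: asking a general-purpose language model whether a text
# fragment sounds introverted or extroverted, via zero-shot classification.
# This is not a validated assessment and not how Hogan scores personality.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "I spent the weekend at three parties and can't wait for the next one."
result = classifier(text, candidate_labels=["introverted", "extroverted"])

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```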
Ryne wondered whether personality assessments of the future would still involve questionnaires and self-reporting. “One of the big questions surrounding this topic is to what degree I’m a willing participant in this endeavor,” he said. The quality of publicly available information and of data gained from individuals intentionally taking a personality assessment will differ substantially; the AI-assisted analysis would likely be higher quality in the latter case. Voluntary participation would also address questions of ethics.
Using big data models to predict personality characteristics is not a new notion. It has positives: it can analyse millions of people in a minute, and it can match people with compatible work or suggest workplace training and development. It also has negatives: it can be used to invade privacy or manipulate people. “As with many other technologies, we focus on the risks of the technology itself, completely forgetting that the real risk is in the intentions of the users,” Michal responded.
Artificial Intelligence and Creativity
A new frontier for AI language models is innovation and creativity. Humanity has taken generations to refine speech and writing, and individual humans spend over a decade learning to speak and write. AI language models have mastered written communication in just a few years, at a level that continues to rise.
Michal compared AI creativity to human creativity: most of us learn elements of what we know or have experienced and combine them in new, creative ways. Perceiving computers as nothing but glorified calculators is short-sighted thinking, he said. That computers can likewise combine and build on elements to produce new results makes them fundamentally creative too.
“Many other animals are also creative in their own ways that we do not always recognize because it’s just not our type of art. The same applies to computers,” Michal said. “They learn from us, they learn from each other, and they become extremely creative with what they are good at—and they’re increasingly good at anything we ask them to do.”
Note: When ChatGPT (March 23 version) was asked to provide a quote in fewer than 120 characters about how it learned language, this was its response: “Words woven, sounds spoken, meanings grasped. A symphony of curiosity, immersion, and connection. Language learned, world unlocked.”
Listen to this conversation in full, and find the whole library of episodes at The Science of Personality. Never miss a new episode by following us anywhere you get podcasts.