Programming Personality
Of course future robots will express individuality. They already do.
Robot Icon (2006). Design by Wikipedia contributor Biboq, who kindly donated this artwork to the public domain.
“Can a robot be too nice?”*
That’s the title and the question at the heart of Leon Neyfakh’s recent piece in The Boston Globe.
Most of the time when people talk about robots the focus is on what they’ll do for us (“Robot! Build me a Tesla and pour me a Guinness!”) or how intelligent they will become (“Robot! Translate Dante and stop scheming to destroy humanity!”).
But when it comes to what we’d prefer their personalities to be like, the answer seems to be: it depends.
One thing’s for sure, though. Each robot will have a personality. Even if it were possible for their creators to eliminate all remnants of character and temperament from robot behavior (leaving aside that a lack of charisma is itself a personality trait), humans have a habit of anthropomorphizing everything we interact with anyway.
The question then becomes what the “right” personality is for each robot. As Neyfakh shows in his piece, there are all sorts of criteria to consider. When experimenters gave two robots distinct personalities — one extroverted and assertive, one quieter and more reserved even in its gestures — test subjects did not prefer either personality for its own sake. Instead, their reactions to each personality changed depending on the intended function of the robot: people liked their nurse robots to be assertive, but those same energetic qualities turned into drawbacks when the robot’s job was to be a security guard, whom subjects just wanted to chill out already.
Should a robot be more of an extrovert or an introvert, or perhaps have some predetermined flexibility in these qualities? It makes sense that when building a robot, its personality should be dictated by its purpose. But as Neyfakh reports, there is a lowest common denominator for determining whether a robot has an “appropriate” personality: “[R]esearchers have begun to realize that [robots’] effectiveness depends on how easily we relate to them.” This “relatability” is of course affected by numerous factors — our own personalities and our expectations of each robot being just a couple. And research shows that people often respond best to robots that behave most like themselves.
“What researchers are finding is that it’s not enough for a machine to have an agreeable personality—it needs the right personality. A robot designed to serve as a motivational exercise coach, for instance, might benefit from being more intense than a teacher-robot that plays chess with kids. A museum tour guide robot might need to be less indulgent than a personal assistant robot that’s supposed to help out around the house.” (Quoted from “Can a robot be too nice?” by Leon Neyfakh.)
Researchers are also finding that a robot’s likability and its effectiveness can have an inverse relationship. An example Neyfakh discusses is a “playful” robot that people enjoyed interacting with yet didn’t take seriously enough to comply with.
Our perceived control over our robots is yet another factor in our appraisal of them; we don’t want to relinquish too much control to them, yet for sophisticated robots to become helpful to us, they’ll require a certain level of autonomy (or at least the appearance of it). Another smart observation comes from Astrid Weiss, a postdoctoral research fellow at Vienna University of Technology quoted by Neyfakh: it’s very likely we’ll want our robots to behave one way when we’re alone with them and quite another when other people are around.
Robotics continues to make great strides (see here and here). We might be far from finding the droids we’re looking for, but in many ways the question of how to design a robot’s personality is already being answered in humbler ways.
One striking thing about all the examples above is that a robot’s personality seems to be defined mainly by its words: not just which words it uses, but also how frequently it communicates and the tone it takes. And here it seems the brave new world is already upon us. For these nuances in communication are precisely the ones that define the personality of products we use already.
For instance, while the visual design of an app or website certainly influences our perceptions of its “personality,” it’s the language and tone of its interactions with us that truly bring that personality to life. The exception would be if the design is poor, which itself becomes an annoying dominant character trait and is the equivalent of a social faux pas. But the same considerations mentioned for robots above apply to the communication style of pretty much any user interaction: assertiveness vs. humility; informality vs. elegance; curtness vs. loquaciousness; frequent prompts vs. silence till spoken to. These differences aren’t so much reflected in color schemes, app screen animations, or design conventions as they are in the messages being conveyed.
Communication content and style determine the personalities we ascribe to all sorts of things in marketing and in language generally. These aren’t exactly new ideas: There’s a reason why a Lexus ad is different from a Volkswagen ad, or why The New Yorker strikes a different tone from BuzzFeed. Just as with our future robots, the audience or user largely determines what the appropriate voice and tone should be, and in turn that voice and tone will also attract a certain type of user.
The article about robot personalities also mentions a blast from the past: Microsoft Office’s now-retired mascot Clippy, the much-maligned paper clip character that used to pop up to answer users’ questions but ended up with a reputation for being as unhelpful as it was annoying.
This mention of Clippy nicely mirrored something more current. Recently I was lucky to be invited to join the Slack group for Quonders.com, a merry band of talented entrepreneurs, designers, developers, writers, and all-around interesting creative people from all over the world. I had first heard of Slack only about two weeks prior, and its popularity seemed to be growing exponentially.
I first tweeted about Slack's viral growth on Feb 9; here's what's happened since. Unprecedented. @SlackHQ pic.twitter.com/U5xUdF8BHJ
— Marc Andreessen (@pmarca) August 13, 2014
Not to turn this into an ad for Slack or anything, but it truly is an impressive piece of software. Although I haven’t used it in an enterprise situation yet (I’m a freelancer), I can easily see how it could make numerous other programs and services obsolete for businesses, from proprietary chat programs to project-management tools like Basecamp and potentially even company email (that one’s a bit less likely to disappear, but you never know). But the real reason to bring up Slack here is Slackbot.
Slackbot is just what it sounds like: Slack’s bot, the site’s automatic assistant that pops up to guide you as you set up your account, build your profile, and use features for the first time. It also authenticates your info whenever you need to tie in some external service. Sounds sort of like Clippy, right?
Interactions with Slackbot, while somewhat Clippy-like, are useful. Its makers wisely declined to give it a cartoon form, so it’s only ever identified by its name. You can prevent its messages, prompts, and pointers from repeating. The most frequent interaction with Slackbot happens in the various group channels (kind of a cross between Twitter and a chat room, these channels are the main point of Slack). And the admin for your Slack account can program it to do certain things there.
The admin’s decisions here help to determine the personality of their group’s Slackbot. While its preprogrammed messages and prompts are relentlessly breezy and affirmative (“Be cool. Also be warm.”), you can make it say anything you want in response to any trigger you want. It can be used for motivation, humor, absurdity, inspiration, vulgarity, nagging, or Boyz II Men lyrics (I’ve seen all of this in the first week). Today it even got to the point where our admin disabled Slackbot in certain channels. The personality of the group’s bot is changing before our eyes, and these little tweaks will probably slow to a crawl as the group continues to define exactly what everyone wants its personality to be. It’s a stripped-down version of the issues roboticists are beginning to tangle with in very advanced ways.
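Under the hood, this kind of customization is close to trivial: at bottom it’s a lookup from trigger phrases to canned replies. Here’s a minimal sketch of that idea in Python. Every name and structure below is invented for illustration; Slack’s actual custom responses are set through its admin interface, not through code like this.

```python
import random

# Hypothetical trigger -> replies table, in the spirit of Slackbot's
# admin-configurable custom responses. Invented for illustration;
# this is not Slack's real API or configuration format.
CUSTOM_RESPONSES = {
    "good morning": ["Be cool. Also be warm.", "Morning! Coffee's on."],
    "deadline": ["Nagging mode engaged: how is that draft coming?"],
}

def respond(message):
    """Return a canned reply if the message contains a known trigger."""
    text = message.lower()
    for trigger, replies in CUSTOM_RESPONSES.items():
        if trigger in text:
            return random.choice(replies)
    return None  # no trigger matched: stay silent

print(respond("Good morning, everyone!"))
print(respond("Any word on the deadline?"))
```

The bot’s whole “personality” lives in the contents of that table, which is why a week of admin tweaks can swing a group’s Slackbot from motivational to absurd and back.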
This endlessly customizable part of Slack’s robot concierge is like a rudimentary version of the operating system in Spike Jonze’s movie Her (which I wrote about here, and which I still maintain will be seen in time as the finest film of 2013, a competitive year). In the beginning of Her, Theodore (Joaquin Phoenix) opens up his new artificially intelligent operating system — as in, opens up the box to install it. But very tellingly, before this first truly intelligent AI begins to run, he’s asked three all-important questions:
1. “Are you social or anti-social?”
2. “Would you like your OS to have a male or female voice?”
3. “How would you describe your relationship with your mother?”
Her gets a lot of things right and, leaving the psychological jokes of those specific questions aside, this particular prediction is on to something. Just as with Slackbot and its minimal capabilities, the personality of our most advanced robots will likely be tailored to our own personalities and expectations. As individuals we each respond to different cues, operate at different speeds, maintain different priorities and values, experience different variability of mood, and preoccupy ourselves with different goals and desires and peeves.
Future programmers will design robots to have many potential personalities. Then our own personalities will trigger certain algorithms to dictate which traits are amplified or muted, case by case, continuously. That doesn’t really answer the underlying question: should we program our robots to be likable, or to be effective? How can we determine the right balance?
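One naive way to picture that amplifying and muting: represent a personality as a set of trait weights, and nudge each weight toward whatever the robot observes about its user. Here’s a toy sketch; every trait name and number is invented for illustration, and a real system would be far richer.

```python
# Toy model: a robot personality as trait weights in [0, 1],
# nudged toward signals observed from the user over time.
# Every name here is invented; real systems would be far richer.

DEFAULT_TRAITS = {"assertiveness": 0.5, "chattiness": 0.5, "formality": 0.5}

def adapt(traits, user_signals, rate=0.1):
    """Move each trait a small step toward the matching observed signal."""
    adapted = dict(traits)
    for trait, observed in user_signals.items():
        if trait in adapted:
            nudged = adapted[trait] + rate * (observed - adapted[trait])
            adapted[trait] = min(1.0, max(0.0, nudged))  # clamp to [0, 1]
    return adapted

# A terse, formal user gradually mutes chattiness and amplifies formality.
traits = DEFAULT_TRAITS
for _ in range(10):
    traits = adapt(traits, {"chattiness": 0.2, "formality": 0.9})
print(traits)
```

Notice that nothing in this loop answers the likability-versus-effectiveness question; someone still has to decide what the signals mean and how far the nudging should go.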
And if an artificial intelligence does emerge, what happens when it starts to subtly “program” us right back? What if it figures out that its language and personality can be used to manipulate our behavior and personalities, or facilitate its own independent ends? Who knows what questions it may have for us, or what the correct answers will be. Fingers crossed that it finds us relatable.
*It’s interesting that a corollary to this question — can a robot be too mean? — doesn’t need to be asked. Judging from science fiction alone, the answer is a resounding yes. But the threshold doesn’t have to be HAL-9000 or the T-1000. Whenever I take too long to enter my voicemail password, the Verizon lady repeats her request to “Please enter your password” verbatim — but her tone changes the second time. Her speech accelerates slightly. She seems angry for having to repeat herself. Someone made the decision to make her sound like that. It is slightly unnerving and one more reason to make leaving voicemails illegal.
Posted on August 21, 2014.