An Embodied Conversational Interface Agent (ECIA) is a virtual agent that appears on a computer screen with a virtual, human-like body and holds real-time conversations with humans. The agent understands multimodal input (speech and gestures) and produces multimodal output (speech and on-screen behavior). According to Small Talk and Conversational Storytelling in Embodied Conversational Interface Agents, a paper by Bickmore and Cassell, it operates in domains in which both social skills and task-oriented dialogue are important. For example, an ECIA that performs the role of a real-estate salesperson maintains a task-oriented dialogue in order to determine a client's housing preferences and buying ability. During such a conversation the agent also uses social skills, meaning the capability for non-task-oriented conversation about neutral topics, so-called small talk.
The term Embodied Conversational Interface Agent is typically used by researchers developing real-time human-computer interaction and modelling the verbal and nonverbal behavior of embodied agents.
Typical usage
A typical example of an Embodied Conversational Interface Agent is a system designed to be used by patients in their hospital beds, developed within Project RED (Re-Engineered Hospital Discharge) at Boston University. It simulates face-to-face conversation and provides health information in a consistent manner and in a low-pressure environment. It adapts its messages to the particular needs of the patient and to the immediate context of the conversation. The agent emulates pointing gestures when explaining written materials to patients, in order to describe the structure and layout of the text.[1]
Louise, part of the above-mentioned Project RED, is one of the Virtual Patient Advocates that assist the Discharge Advocates in teaching patients about components of their care, such as prescribed medications, follow-up appointments and diagnoses. Louise's design was based on the various communication styles of nurses to which patients are receptive. Her dialogue is tailored to each patient based on the information entered into the workstation.
Another Embodied Conversational Interface Agent is Rea (Real Estate Agent), also called a third-generation conversational humanoid. The objective of the Rea project, developed at the MIT Media Lab, is to construct an embodied, multimodal, real-time conversational interface agent. Rea implements the social, linguistic, and psychological conventions of conversation to make interactions as natural as face-to-face conversation with another person. She synthesizes her responses, including speech and accompanying hand gestures, based on the communicative context. Rea interacts with users to determine their needs, shows them around virtual properties, and attempts to sell them a house.[2]
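The behavior described above can be illustrated with a minimal sketch. The names and rules below (`MultimodalInput`, `Agent.respond`, the keyword and gesture triggers) are hypothetical illustrations, not the actual Rea architecture; they only show how a single conversational turn might fuse a recognized utterance with an accompanying gesture, update task state, and choose between a task-oriented and a social (small-talk) response:

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class MultimodalInput:
    speech: str                    # recognized utterance from the user
    gesture: Optional[str] = None  # e.g. "point", "nod", or None

@dataclass
class Agent:
    # minimal conversational state: housing preferences gathered so far
    preferences: dict = field(default_factory=dict)

    def respond(self, turn: MultimodalInput) -> Tuple[str, str]:
        """Return (speech output, gesture label) for one turn."""
        text = turn.speech.lower()
        # Task-oriented branch: record a stated housing preference
        # (here the whole utterance is stored, for simplicity).
        if "bedroom" in text:
            self.preferences["bedrooms"] = text
            return ("This listing has two bedrooms.", "point")
        # Social branch: respond to a nonverbal cue with small talk
        if turn.gesture == "nod":
            return ("Glad you agree! Lovely weather today, isn't it?", "smile")
        # Default: steer the dialogue back to the task
        return ("Tell me what you are looking for in a home.", "open_hands")

agent = Agent()
speech, gesture = agent.respond(MultimodalInput("I need two bedrooms", "point"))
```

The sketch is deliberately rule-based; a real system would replace the keyword checks with speech recognition, gesture tracking, and a discourse planner.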
Background
The term Embodied Conversational Interface Agent is a composite of four words: embodied, conversational, interface, agent.
The word embody has been used since the 1540s in reference to a soul or spirit invested with a physical form, and since the 1660s in reference to principles, ideas, etc. It is derived from em- "in" + body.[3]
Embodied means possessing or existing in bodily form; synonyms include incarnate, bodied, corporal, and corporate.[4]
The word embodiment can be perceived as an English equivalent of avatar, a word from Sanskrit (the classical Indian literary language, in use from the 4th century B.C.E.). Its original meaning, "descent of a Hindu deity", expanded beyond the strictly religious to signify a personification, an embodiment, an incarnation or a representation of an idea, a concept, an object, a man, or a woman.
The word conversation signifies the exchange of thoughts, opinions and feelings in talk. It is derived from the Latin conversationem (nominative conversatio), meaning "act of living with".[5]
The word interface is a combination of two words: inter and face.
Inter is derived from Latin and means "among, between", from Proto-Indo-European (the hypothetical reconstructed ancestor of the Indo-European language family) *enter- "between, among" (compare Sanskrit antar, Old Persian antar, Greek entera, Old Irish eter, Old Welsh ithr, Gothic undar, Old English under), a comparative of *en- "in".[6]
The word face originates in the 13th century, meaning "front of the head", from Old French face "face, countenance, look, appearance", from Vulgar Latin facia, from Latin facies "appearance, form, figure", and secondarily "visage, countenance", probably related to facere "to make". The word face replaced Old English andwlita (from the root of wlitan "to see, look") and ansyn, the more usual word (from the root of seon "to see"). In French, the use of face for "front of the head" was given up in the 17th century and replaced by visage (older vis), from Latin visus "sight".[7]
The term interface means a point of interaction or communication between a computer and any other entity, such as a printer or human operator.[8]
The term interface has been around since the 1880s, meaning “a surface forming a common boundary, as between bodies or regions”. But the word did not really take off until the 1960s, when it began to be used in the computer industry to designate the point of interaction between a computer and another system, such as a printer. The word was applied to other interactions as well - for example between departments in an organization, or between fields of study. Shortly thereafter interface developed a use as a verb, but it never really caught on outside its niche in the computer world, where it still thrives.[9]
The word agent originates in the late 15th century, meaning "one who acts", from Latin agentem, present participle of agere "to set in motion, drive, lead, conduct". The meaning "any natural force or substance which produces a phenomenon" was first recorded in the 1570s.[10]
The term Embodied Conversational Interface Agent was introduced in 1999 by Timothy Bickmore and Justine Cassell. In their paper Small Talk and Conversational Storytelling in Embodied Conversational Interface Agents they described the ongoing development of an embodied conversational interface agent and discussed requirements for the understanding, discourse-planning, and generation components of a real-time conversational interface.