The Evolution of the Conversation Analytic Role-play Method

The Conversation Analytic Role-play Method, or CARM, is an approach to communication training based on conversation analytic (CA) research. CA studies recordings of real-time interaction and the activities that comprise it: the way those activities are designed, and how different designs lead to different outcomes. Focusing on both spoken and embodied (e.g., gaze, gesture) resources, CA investigates the organizational structure and sequence of different phases of talk, such as openings or closings, as well as the organization and design of social actions, such as advising, offering or questioning. In particular, applied CA research identifies the problems and roadblocks that occur in interaction, as well as the techniques and strategies that best resolve and overcome them.

The research findings that underpin CARM workshops were generated in an ESRC-funded study of neighbour disputes (e.g., Stokoe & Edwards, 2009). I approached community mediation services to ask if they might record encounters between mediators and clients. Although some mediators agreed, many did not. Instead, services offered to record the initial inquiry calls coming into their offices. For mediators, these calls were not 'mediation proper', and so they were less concerned about a researcher studying them. From our perspective, the data were ideal for a study of neighbour disputes because they comprised a naturally occurring survey of the causes of disputes, as well as an opportunity to examine the ways that neighbour complaints were formulated (e.g., Edwards, 2005; Stokoe, 2009). Towards the end of the project, the focus turned away from analysing the design of neighbour complaints and towards the organization of initial inquiries themselves and, in particular, whether or not callers became clients of community mediation organizations by the end of their encounter with a mediator. Given that the services are generally free, it was surprising that many callers were not 'converted' into clients.

As conversation analysts, we know that our data provide the basis for naturally occurring experiments which generate evidence about the effectiveness, or otherwise, of communicative practices. In my analysis of initial inquiries to mediation services, I found that certain types of mediator question were more likely than others to generate a positive response from callers (Stokoe, 2013a). I also categorized ways of explaining mediation as a service as effective or ineffective, as demonstrated by callers' responses. Such endogenous measurements provided evidence of the outcomes achieved by 'interactional nudges'. By identifying practices that led to successful and unsuccessful outcomes, I generated research-based information to help mediators better engage callers and convert them into clients (e.g., Edwards & Stokoe, 2007; Stokoe, 2013a).

This research fed into the development of CARM which, using anonymized recordings presented synchronously with technical transcripts, takes trainees through the live development of actual service encounters, stopping to discuss, then explain, the practices that work, or do not work (Stokoe, 2011). In workshops, animation software is used to play the audio and transcript synchronously. This means that workshop participants live through conversations without knowing what is coming next, and then 'role-play' what they might do next to handle the situation. A workshop is developed by selecting extracts from research findings about a particular practice (e.g., explaining a service). CARM provides participants with a unique opportunity to examine communicative practices in forensic detail, and to understand what works from a rigorous empirical basis. Since 2010, over 200 CARM workshops have been delivered to local, regional and national mediation and alternative dispute resolution organizations in the UK and USA. I also developed similar workshops for police officers, based on research with Derek Edwards on investigative interviews (e.g., Edwards & Stokoe, 2011). CARM is currently crossing into other sectors (e.g., medicine, commercial sales) and is being used by other conversation analysts whom I have trained to use the technology and deliver CARM's distinctive line-by-line methodology.
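To give a concrete, if simplified, sense of this line-by-line format, the short Python sketch below shows the basic idea of revealing a transcript in time with a recording and pausing at a chosen point for discussion. It is purely illustrative, not the CARM software itself: CARM uses dedicated animation synchronized with the actual audio, and the timestamps here are invented for the example (the transcript lines are from Extract 1 below).

# Hypothetical sketch of a line-by-line transcript reveal with a discussion stop.
# Not the actual CARM software; timings are invented for illustration only.

import time

# (seconds_into_recording, transcript_line) -- timestamps are assumed, not real
TRANSCRIPT = [
    (0.0, "M:  We wouldn't take si:des, we wouldn't- (0.7) try an' decide who's right"),
    (3.2, "    or wrong but would- .hh would try to help you both um:: (0.8)"),
    (6.5, "    sort out uh: the differences between: (0.2) between you."),
    (9.0, "    (2.5)"),
    (11.5, "C:  Well I-hh (1.2) to be qui:te honest I don't think she'd cooperate."),
]

STOP_AFTER_LINE = 3  # pause once the mediator's explanation has played, before the caller responds


def play(transcript, stop_after):
    """Reveal transcript lines in (simulated) sync with the recording."""
    previous = 0.0
    for index, (timestamp, line) in enumerate(transcript, start=1):
        time.sleep(timestamp - previous)  # stand-in for real audio playback
        print(line)
        previous = timestamp
        if index == stop_after:
            input("\n[Playback paused] What would you do next? Press Enter to continue...")
    print("\n[End of extract]")


if __name__ == "__main__":
    play(TRANSCRIPT, STOP_AFTER_LINE)

As in the workshops described above, the stopping point comes before the next speaker's turn is revealed, so that participants can propose what they would say next before hearing what actually happened.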

 

What problem does CARM address?

CARM's use of actual interaction stands in sharp contrast to the communication training world at large, which relies almost exclusively on role-play and simulation to both train and assess people's skills. Simulation methods involve people-in-training, from call-centre workers and corporate business managers to doctors and police officers, interacting with actors or other simulated interlocutors. The guiding assumption of such encounters is that they mimic 'real life' interactional events closely enough to be effective in two ways: as practice for the conversational moves that would comprise an actual encounter, and as an assessment of what participants would do in an actual encounter.

However, as I have discussed elsewhere (Stokoe, 2013b), the authenticity of role-played interaction is assumed but untested. Indeed, I have shown that there are some striking differences between simulated and actual encounters. For instance, in my research comparing real police investigative interviews with simulations, I found that paid actors playing the part of suspects often did things that real suspects did not, because they could: the consequences for them were not what they would be for a real suspect in a real interview. I also found that officers in training did things that they did not do in actual encounters. In simulations, actions were unpacked more elaborately, exaggeratedly, or explicitly, ensuring that particular features of their talk were made interactionally visible. A useful analogy might be taking a driving test and showing the examiner that 'I am looking in the rear-view mirror' by turning one's head unambiguously towards it. In other words, people may be assessed highly because they do things in role-play that they do not do in real encounters, and these 'assessable' things are not necessarily effective in real situations.

 

The CARM approach: The landscape of interaction and racetracks, projects and slots

CARM workshops, like any translation of academic work to other professional settings, must engage non-CA audiences. I use several analogies and phrases that engage such audiences and help them quickly understand what CA research is and what it provides. One useful analogy is to think of conversational encounters as having a distinct landscape, like a racetrack, with distinct phases and hurdles. We start a conversation, or race, with a recipient or recipients and, along the way, complete various projects (greetings, openings, reason for call, and so on). CA researchers study multiple instances of the same type of interaction and, in so doing, discern the architecture of the racetrack: its overall organization and structure. People may anticipate and avoid hurdles, or run into them and knock the interaction off course. So, for example, telephone calls between an organization and a client or potential client may involve projects such as opening the call, explaining problems, offering services, making appointments, and closing the call. Analysis focuses on how those projects are designed, as well as the slots that open up for both parties to fill with a variety of different things (see Sacks, 1992). We can see how different designs lead to different conversational trajectories or outcomes, either avoiding or falling into the racetrack's hurdles.

CARM helps people to understand the landscape of their particular workplace or professional racetrack. Because a great deal of research on communication does not start where CA starts — with an analysis of people actually doing their job — these racetracks are often completely unstudied. CARM works by turning analyses of racetracks into evidence-based training materials. Participants are exposed, often uniquely in their careers, to the actual activities of anonymized colleagues doing the job that participants themselves do. As one participant commented in feedback, "The fact that it was 'real', as opposed to role-play was a relief. It was so much better, and more interesting and motivating, to deal with reality as opposed to made-up scenarios and acting."

 

From caller to client: Selling an unknown service

One 'project on the racetrack' in initial calls between mediators and potential clients is for mediators to explain what their service involves. Analysis revealed that this information can be packaged in different ways. Inside the 'naturally occurring experiments' of conversational data, we can see the effectiveness of different explanations by examining how callers respond to them. The extract below is typical of one way that mediators explain the process. Before showing this explanation, I often ask mediators to produce the sort of explanation that they might use on their organization's website (numbers in brackets represent the length of pauses in seconds):

Extract 1a: HC-7
1      M:      We wouldn't take si:des, we wouldn't- (0.7) try an' decide who's right
2                or wrong but would- .hh would try to help you both um:: (0.8)
3                sort out uh: the differences between: (0.2) between you.

I found that explanations like this one, which include phrases like "we don't take sides", "we don't decide who's right or wrong", as well as others like "we don't have any authority" or "we don't offer solutions", co-occurred with the caller saying 'no' to mediation. However, these sorts of phrases are regularly used by mediators to explain the process, including on their websites. After participants have discussed M's explanation, the caller's response is revealed.

Extract 1b: HC-7
1      M:      We wouldn't take si:des, we wouldn't- (0.7) try an' decide who's right
2                or wrong but would- .hh would try to help you both um:: (0.8)
3                sort out uh: the differences between: (0.2) between you.
4                           (2.5)
5      C:       Well I-hh (1.2) to be qui:te honest I don't think she'd cooperate.

Because callers have phoned up with a one-sided problem (it is the other party's fault), the offer of a two-sided solution is generally unattractive. Callers take opportunities to characterize the other party negatively, and the account provided at line 5, that the other party is 'the kind of person who won't mediate', was commonly used in callers' rejections of mediation as a course of action. Explanations of mediation that focused on process and procedure, and did not include phrases like "we don't take sides", were more effective in keeping callers engaged and making them more likely to agree to mediate. By this point in the workshop, participants have learned a good deal about CA's technical transcription, and they come to recognize the silence at line 4, when it is played, as indicative of upcoming bad news. What they see is that, at line 4, what does not happen is an enthusiastic response to M's explanation. They then discuss what they might do in response to line 5.

Extract 1c: HC-7
1      M:      We wouldn't take si:des, we wouldn't- (0.7) try an' decide who's right
2                or wrong but would- .hh would try to help you both um:: (0.8)
3                sort out uh: the differences between: (0.2) between you.
4                          (2.5)
5      C:       Well I-hh (1.2) to be qui:te honest I don't think she'd cooperate.
6                          (0.4)
7      M:       N:o:.

Some mediators have effective ways to handle this most common route out of mediation. However, this mediator does not: he does not know his racetrack. He does not know that this way of formulating mediation is likely to generate such a response, and does not have a strategy for handling it. In CARM workshops, I present a number of explanations that do not work, and a number that do. Mediators are able to see directly how to engage prospective clients from the evidence playing out in front of them.

 

From CARM-Talk to CARM-Text

Until recently, CARM focused on recording spoken encounters and turning analytic findings into training for practitioners. But I have also developed 'CARM-Text', which takes analyses of spoken interaction and applies them to an organization's written communication with its clients and users (e.g., in websites, posters, leaflets, and letters). Text deals with the same hurdles as talk, but in different ways. Talk involves dealing with hurdles as they arise, in an unfolding interaction. Texts are constructed to pre-empt and avoid those hurdles in the first place, smoothing out the racetrack before any real-time dialogue takes place. If we know what works and does not work in explanations of mediation from the 'naturally occurring experiments' provided by the sorts of CA research discussed so far, we can translate those findings to the realm of written communication, helping to ensure that any explanations of mediation on an organization's website, posters and so on do not include the phrases that we know do not appeal to prospective clients. CARM-Text has been used by several mediation services; I ran the first CARM-Text workshop for the national collaborative law organization Resolution earlier in 2014. I also consulted for the UK Ministry of Justice's Communication and Information Directorate, which adopted CARM-based wording in their new marketing of family mediation across their website, leaflets, posters and animation.

 

Concluding Remarks

In the last few years, CARM's reach and impact have grown considerably. CARM workshops have been accredited by the UK College of Mediators, meaning that participants are awarded 'Continuing Professional Development' (CPD) points, which practitioners must accrue each year. The route to CPD is one way of developing wider audiences and demand for training interventions, as well as of generating interest in CA research and changing the culture of communication training (see Meagher, 2013, on the impact of CARM). Furthermore, CARM has recently been commercialized as a not-for-profit social enterprise. Overall, CARM and, now, CARM-Text have had a significant impact on the mediation field, helping services that themselves help people in conflict.

For more information, visit www.carmtraining.org.

 

References

Edwards, D. (2005). Moaning, whinging and laughing: The subjective side of complaints. Discourse Studies, 7(1), 5-29.

Edwards, D., & Stokoe, E. (2007). Self-help in calls for help with problem neighbours. Research on Language and Social Interaction, 40(1), 9-32.

Edwards, D., & Stokoe, E. (2011). "You don't have to answer": Lawyers' contributions in police interrogations of suspects. Research on Language and Social Interaction, 44(1), 21-43.

Meagher, L.R. (2013). Research impact on practice: Case study analysis. Swindon: Economic and Social Research Council.

Sacks, H. (1992). Lectures on conversation, Volume 1. Oxford, UK: Blackwell.

Stokoe, E. (2013a). Overcoming barriers to mediation in intake calls to services: Research-based strategies for mediators. Negotiation Journal, 29(3), 289-314.

Stokoe, E. (2013b). The (in)authenticity of simulated talk: Comparing role-played and actual conversation and the implications for communication training. Research on Language and Social Interaction, 46(2), 1-21.

Stokoe, E. (2011). Simulated interaction and communication skills training: The 'Conversation Analytic Role-play Method'. In C. Antaki (Ed.), Applied conversation analysis: Changing institutional practices (pp. 119-139). Basingstoke: Palgrave Macmillan.

Stokoe, E. (2009). Doing actions with identity categories: Complaints and denials in neighbour disputes. Text and Talk, 29(1), 75-97.

Stokoe, E., & Edwards, D. (2009). Accomplishing social action with identity categories: Mediating neighbour complaints. In M. Wetherell (Ed.), Theorizing identities and social action (pp. 95-115). London: Sage.
