Millions of people participate in online peer-to-peer support sessions, yet there has been little prior research on systematic, psychology-based evaluation of fine-grained peer-counselor behavior in relation to client satisfaction. This paper seeks to bridge this gap by mapping peer-counselor chat messages to motivational interviewing (MI) techniques. We annotate 14,797 utterances from 734 chat conversations using 17 MI techniques and introduce four new interviewing codes, such as chit-chat and inappropriate, to account for the unique conversational patterns observed on online platforms. We automate the process of labeling peer-counselor responses with MI techniques by fine-tuning large domain-specific language models, and then use these automated measures to investigate the behavior of peer counselors via correlational studies. Specifically, we study the impact of MI techniques on conversation ratings to identify the techniques that predict clients' satisfaction with their counseling sessions. When counselors use techniques such as reflection and affirmation, clients are more satisfied. Examining volunteer counselors' change in technique usage suggests that counselors learn to use more introductions and open questions as they gain experience. This work provides a deeper understanding of the use of motivational interviewing techniques on peer-to-peer counseling platforms and sheds light on how to build better training programs for volunteer counselors on online platforms.
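As a rough illustration of the kind of pipeline described above (not the authors' code), the sketch below fine-tunes a domain-specific transformer to label utterances with MI codes; the model checkpoint, label subset, and hyperparameters are assumptions made for the example.

```python
# Minimal sketch of fine-tuning a domain-specific transformer to label
# counselor utterances with MI techniques. Model name, label set, and
# hyperparameters are illustrative assumptions, not the paper's setup.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

MI_CODES = ["reflection", "affirmation", "open_question", "introduction",
            "chit_chat", "inappropriate"]  # illustrative subset of the codes

tokenizer = AutoTokenizer.from_pretrained("mental/mental-bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "mental/mental-bert-base-uncased", num_labels=len(MI_CODES))

def encode(batch):
    return tokenizer(batch["utterance"], truncation=True,
                     padding="max_length", max_length=64)

# `annotations` stands in for the annotated utterance/MI-code pairs.
annotations = [{"utterance": "That sounds really hard.", "label": 0}]
train_ds = Dataset.from_list(annotations).map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mi_classifier", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```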
Digital platforms, including online forums and helplines, have emerged as avenues of support for caregivers suffering from postpartum mental health distress. Understanding support seekers' experiences as shared on these platforms could provide crucial insight into caregivers' needs during this vulnerable time. In the current work, we provide a descriptive analysis of the concerns, psychological states, and motivations shared by healthy and distressed postpartum support seekers on two digital platforms, a one-on-one digital helpline and a publicly available online forum. Using a combination of human annotations, dictionary models, and unsupervised techniques, we find stark differences between the experiences of distressed and healthy mothers. Distressed mothers described interpersonal problems and a lack of support, with 8.60%-14.56% reporting severe symptoms including suicidal ideation. In contrast, the majority of healthy mothers described childcare issues, such as questions about breastfeeding or sleeping, and reported no severe mental health concerns. Across the two digital platforms, we found that distressed mothers shared similar content. However, the patterns of speech and affect shared by distressed mothers differed between the helpline and the online forum, suggesting that the design of these platforms may shape meaningful measures of their support-seeking experiences. Our results provide new insight into the experiences of caregivers suffering from postpartum mental health distress. We conclude by discussing methodological considerations for understanding content shared by support seekers and design considerations for the next generation of support tools for postpartum parents.
In the context of digital therapeutic interventions, such as internet-delivered cognitive behavioral therapy (ICBT) for the treatment of depression and anxiety, extensive research has shown how the involvement of human supporters or coaches who assist the people receiving treatment improves user engagement with the therapy and leads to more effective health outcomes than unsupported interventions. Aiming to maximize the impact and outcomes of this human support, this research investigates how the new opportunities offered by recent advances in the fields of AI and machine learning (ML) can contribute to effectively supporting the work practices of ICBT supporters. This paper reports detailed findings of an interview study with 15 ICBT supporters that deepens understanding of their existing work practices and information needs, with the aim of meaningfully informing the development of useful, implementable ML applications in the context of depression and anxiety treatment. The analysis contributes (1) a set of six themes that summarize the strategies and challenges of ICBT supporters in providing effective, personalized feedback to their mental health clients; and, in response to these learnings, (2) concrete opportunities for each theme for how ML methods could help support and address these challenges and information needs. We close with reflections on the potential social, emotional, and pragmatic implications of introducing new machine-generated data insights into supporter-led client review practices.
Incivility remains a major challenge for online discussion platforms, to such an extent that even conversations between well-intentioned users can often derail into uncivil behavior. Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users. In this work we propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in and actively guides them as they are drafting their replies to avoid further escalation. As a proof of concept for this paradigm, we design an algorithmic tool that provides such proactive information directly to users, and conduct a user study in a popular discussion platform. Through a mixed methods approach combining surveys with a randomized controlled experiment, we uncover qualitative and quantitative insights regarding how the participants utilize and react to this information. Most participants report finding this proactive paradigm valuable, noting that it helps them to identify tension that they may have otherwise missed and prompts them to further reflect on their own replies and to revise them. These effects are corroborated by a comparison of how the participants draft their reply when our tool warns them that their conversation is at risk of derailing into uncivil behavior versus in a control condition where the tool is disabled. These preliminary findings highlight the potential of this user-centered paradigm and point to concrete directions for future implementations.
COVID-19 has had disproportionate mental health consequences for the public at different stages of the pandemic. We use a computational approach to capture the specific aspects that trigger an online community's anxiety about the pandemic and investigate how these aspects change over time. First, we identified nine sources of anxiety (SOAs) in a sample of Reddit posts from r/COVID19_support ($n$ = 86) using thematic analysis. We then automatically labeled the SOAs in a larger chronological sample ($n$ = 6,535) by training a classifier of Reddit users' anxiety on a manually annotated sample ($n$ = 793). The nine SOAs align with items in a recently developed pandemic anxiety measurement scale. We observed that Reddit users' concerns about health risks remained high during the first eight months of the pandemic. These concerns diminished dramatically afterwards, even though the surge in cases occurred later. In general, users' language disclosed the SOAs less intensely as the pandemic progressed. However, concerns about mental health and the future grew steadily throughout the period covered by this study. People also tended to use more intense language to describe mental health concerns than health risks or concerns about death. Our results suggest that although COVID-19 gradually receded as a health threat thanks to appropriate countermeasures, the mental health of this online group did not necessarily improve. Our system lays the foundation for population health and epidemiology scholars to examine the aspects that trigger pandemic anxiety in a timely manner.
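A minimal sketch of how such an annotate-then-scale-up step could look, assuming a simple bag-of-words multi-label classifier; the SOA names and model choice are illustrative assumptions rather than the paper's actual pipeline.

```python
# Illustrative sketch: train a multi-label SOA tagger on manually annotated
# posts, then apply it to the larger chronological sample.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

# Hypothetical SOA names; the paper's nine labels may differ.
annotated_posts = [
    ("I'm terrified of catching the virus at work.", ["health_risks"]),
    ("The isolation is wrecking my mood and I dread next year.",
     ["mental_health", "future"]),
]
texts, labels = zip(*annotated_posts)

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(texts, y)

# Tag posts from the larger chronological sample.
new_posts = ["Cases are surging again and I can't stop worrying."]
print(mlb.inverse_transform(clf.predict(new_posts)))
```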
The last decade has seen growing interest in robots as well-being coaches. However, cohesive and comprehensive guidelines for designing robots as coaches that promote mental well-being have not yet been proposed. This paper details design and ethical recommendations based on a qualitative meta-analysis, grounded in a grounded-theory approach, drawing on data collected across three different user-centered studies involving robot well-being coaches, namely: (1) a participatory design study conducted with 11 participants consisting of both prospective users, who had taken part in a brief solution-focused practice study with a human coach, and coaches from different disciplines; (2) semi-structured individual interview data from 20 participants who took part in a positive psychology intervention study with the robot well-being coach Pepper; and (3) a participatory design study conducted with 3 participants of the positive psychology study as well as 2 relevant well-being coaches. After conducting thematic analysis and the qualitative meta-analysis, we collated the data gathered into convergent and divergent themes and distilled from these results a set of design guidelines and ethical considerations. Our findings can inform the key aspects to take into account when designing robot coaches for mental well-being.
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We use reinforcement learning from human feedback to train our model, with additions that help human raters judge the agent's behavior. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown enables us to collect more targeted human judgments of agent behavior and allows for more efficient rule-conditioned reward models. Second, our agent provides evidence from sources supporting its factual claims when collecting preference judgments over model statements. For factual questions, the evidence Sparrow provides supports its answers 78% of the time. Sparrow is preferred over baselines by raters while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
The most prominent tasks in emotion analysis are assigning emotions to texts and understanding how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, without an emotion name being explicitly mentioned. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure people's cognitive evaluations of events they consider relevant. They include evaluations of whether an event is novel, whether the person considers themselves responsible, whether it is aligned with their own goals, and many others. Such appraisals explain which emotions develop from an event; for example, a novel situation may induce surprise, and one with uncertain consequences may evoke fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goal of understanding whether appraisal concepts can be reliably reconstructed by annotators, whether they can be predicted by text classifiers, and whether appraisal concepts help to identify emotion categories. To achieve this, we compile a corpus by asking people to textually describe events that triggered particular emotions and to disclose their appraisals. We then ask readers to reconstruct the emotions and appraisals from the text. This setup allows us to measure whether emotions and appraisals can be recovered purely from text and provides a human baseline against which to judge model performance. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
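One hedged way to picture the final claim, that appraisal variables help emotion classification, is a two-step classifier in which predicted appraisal scores are appended to text features; the appraisal subset, example texts, and model choices below are illustrative assumptions rather than the authors' setup.

```python
# Rough sketch: predict appraisal dimensions from text, then feed them
# alongside lexical features into an emotion classifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

APPRAISALS = ["novelty", "own_responsibility", "goal_relevance"]  # subset

event_texts = ["A stranger suddenly shouted my name on the street.",
               "I missed the deadline because I forgot to set a reminder."]
appraisal_labels = np.array([[1, 0, 0],   # novel, not self-caused, not goal-relevant
                             [0, 1, 1]])
emotion_labels = ["surprise", "guilt"]

vec = TfidfVectorizer()
X = vec.fit_transform(event_texts).toarray()

# Step 1: text -> appraisal variables.
appraisal_clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
appraisal_clf.fit(X, appraisal_labels)

# Step 2: text features + predicted appraisal probabilities -> emotion.
appraisal_feats = np.hstack([est.predict_proba(X)[:, 1:2]
                             for est in appraisal_clf.estimators_])
emotion_clf = LogisticRegression(max_iter=1000)
emotion_clf.fit(np.hstack([X, appraisal_feats]), emotion_labels)
print(emotion_clf.predict(np.hstack([X, appraisal_feats])))
```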
The importance and pervasiveness of emotions in our lives makes affective computing a tremendously important and vibrant line of work. Systems for automatic emotion recognition (AER) and sentiment analysis can be facilitators of enormous progress (e.g., improving public health and commerce) but also enablers of great harm (e.g., suppressing dissidents and manipulating voters). Thus, it is imperative that the affective computing community actively engage with the ethical ramifications of its creations. In this paper, I synthesize and organize information from the AI ethics and emotion recognition literature to present fifty ethical considerations relevant to AER. Notably, the paper pins down assumptions hidden in how AER is commonly framed and in the choices often made regarding the data, method, and evaluation. Special attention is paid to the implications of AER for privacy and for social groups. Along the way, key recommendations are made for responsible AER. The paper's goal is to facilitate and encourage more thoughtfulness on why to automate, how to automate, and how to judge success, well before AER systems are built. Additionally, the paper serves as a useful introductory document on emotion recognition (complementing survey articles).
Telehealth helps facilitate access to medical professionals by enabling remote healthcare services for patients. These services have gradually gained popularity with the emergence of the necessary technological infrastructure. Since the beginning of the COVID-19 crisis, the benefits of telehealth have become even more apparent, as people have tended to avoid visiting doctors in person during the pandemic. In this paper, we focus on facilitating chat sessions between a doctor and a patient. We note that as the demand for telehealth services grows, the quality and efficiency of the chat experience can be crucial. We therefore develop a smart auto-response generation mechanism for medical conversations that helps doctors respond to consultation requests efficiently, particularly during busy sessions. We explore over 900,000 anonymized historical online messages between doctors and patients collected over nine months. We apply clustering algorithms to identify the most frequent responses by doctors and manually label the data accordingly. We then use this preprocessed data to train a machine learning algorithm to generate responses. The considered algorithm has two steps: a filtering (i.e., triggering) model to filter out infeasible patient messages, and a response generator to suggest the top-3 doctor responses for the messages that successfully pass the triggering phase. The method provides a precision@3 of 83.28% and shows robustness to its parameters.
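A toy sketch of the two-step design, assuming TF-IDF features, a logistic-regression trigger, and a logistic-regression ranker over canned responses; the deployed system's models and response inventory are not specified in the abstract, so everything named here is hypothetical.

```python
# Two-stage suggester: (1) a binary "trigger" filter, (2) a ranker that
# proposes the top-3 most likely canned doctor responses.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical canned responses derived from clustering historical replies.
CANNED = ["Please send a photo of the affected area.",
          "How long have you had these symptoms?",
          "I recommend booking an in-person visit."]

# Stage 1: trigger model on toy (patient message, is_answerable) pairs.
trigger = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
trigger.fit(["My rash is itchy", "asdkjh??", "I have had a cough for a week"],
            [1, 0, 1])

# Stage 2: ranker on toy (patient message, canned-response id) pairs.
rank_vec = TfidfVectorizer()
rank_clf = LogisticRegression(max_iter=1000)
X_rank = rank_vec.fit_transform(["My rash is itchy",
                                 "I have had a cough for a week"])
rank_clf.fit(X_rank, [0, 1])

def suggest(message, k=3):
    """Return up to k suggested responses, or none if the trigger rejects."""
    if trigger.predict([message])[0] == 0:
        return []
    probs = rank_clf.predict_proba(rank_vec.transform([message]))[0]
    top = np.argsort(probs)[::-1][:k]
    return [CANNED[rank_clf.classes_[i]] for i in top]

print(suggest("I developed an itchy rash yesterday"))
```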
To address the widespread problem of uncivil behavior, many online discussion platforms employ human moderators to take action against objectionable content, such as removing it or placing sanctions on its authors. This reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation, and has accordingly underpinned many recent efforts at introducing automation into the moderation process. Comparatively less work has been done to understand other moderation paradigms -- such as proactively discouraging the emergence of antisocial behavior rather than reacting to it -- and the role algorithmic support can play in these paradigms. In this work, we investigate such a proactive framework for moderation in a case study of a collaborative setting: Wikipedia Talk Pages. We employ a mixed methods approach, combining qualitative and design components for a holistic analysis. Through interviews with moderators, we find that despite a lack of technical and social support, moderators already engage in a number of proactive moderation behaviors, such as preemptively intervening in conversations to keep them on track. Further, we explore how automation could assist with this existing proactive moderation workflow by building a prototype tool, presenting it to moderators, and examining how the assistance it provides might fit into their workflow. The resulting feedback uncovers both strengths and drawbacks of the prototype tool and suggests concrete steps towards further developing such assisting technology so it can most effectively support moderators in their existing proactive moderation workflow.
Short videos have become one of the leading media that younger generations use to express themselves online, making them a driving force in shaping online culture. In this context, TikTok has emerged as the platform where viral videos are often posted first. In this paper, we study how the content of short videos posted on TikTok contributes to their virality. We apply a mixed-methods approach to develop a codebook and identify important virality features. We do so vis-à-vis three research hypotheses, namely that: 1) the video content, 2) TikTok's recommendation algorithm, and 3) the popularity of the video creator contribute to virality. We collect and label a dataset of 400 TikTok videos and train classifiers to help us identify the features that most influence virality. While the number of followers is the strongest predictor, close-up and medium shot scales also play an essential role, as do the duration of the video, the presence of text, and the point of view. Our research highlights the features that distinguish viral from non-viral TikTok videos, laying the groundwork for developing additional methods to create more engaging online content and to proactively identify potentially risky content that may reach large audiences.
Empathy is a vital factor that contributes to mutual understanding and joint problem-solving. In recent years, a growing number of studies have recognized the benefits of empathy and started to incorporate empathy in conversational systems. We refer to this topic as empathetic conversational systems. To identify the critical gaps and future opportunities in this topic, this paper examines this rapidly growing field using five review dimensions: (i) conceptual empathy models and frameworks, (ii) adopted empathy-related concepts, (iii) datasets and algorithmic techniques developed, (iv) evaluation strategies, and (v) state-of-the-art approaches. The findings show that most studies have centered on the use of the EMPATHETICDIALOGUES dataset, and the text-based modality dominates research in this field. Studies mainly focused on extracting features from the messages of the users and the conversational systems, with minimal emphasis on user modeling and profiling. Notably, studies that have incorporated emotion causes, external knowledge, and affect matching in the response generation models have obtained significantly better results. For implementation in diverse real-world settings, we recommend that future studies should address key gaps in areas of detecting and authenticating emotions at the entity level, handling multimodal inputs, displaying more nuanced empathetic behaviors, and encompassing additional dialogue system features.
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
Self-disclosed mental health diagnoses, which serve as ground-truth annotations of mental health status in the absence of clinical measures, underpin the conclusions of most computational studies of mental health language from the past decade. However, psychiatric conditions are dynamic: a prior depression diagnosis may no longer be indicative of an individual's mental health, whether because of treatment or other mitigating factors. We ask: to what extent do self-disclosures of mental health diagnoses remain relevant over time? We analyze the recent activity of individuals who disclosed a depression diagnosis on social media five years ago and, in turn, gain new insight into how mental health status is presented on social media. We also provide expanded evidence of personality-related biases present in datasets curated from self-disclosed diagnoses. Our findings motivate three practical recommendations for improving mental health datasets curated using self-disclosed diagnoses: 1) annotate diagnosis dates and psychiatric comorbidities; 2) sample control groups using propensity-score matching; 3) identify and remove spurious correlations introduced by selection bias.
The growth of abusive content on social media platforms has a negative impact on online users. Fear, dislike, discomfort, or mistrust of lesbian, gay, transgender, or bisexual persons is defined as homophobia/transphobia. Homophobic/transphobic speech is a type of offensive language that can be summarized as hate speech directed toward LGBT+ people, and it has been receiving increasing attention in recent years. Online homophobia/transphobia is a severe societal problem that can make online platforms toxic and unwelcoming to LGBT+ people, while also undermining efforts toward equality, diversity, and inclusion. We present a new taxonomy for online homophobia and transphobia, as well as an expert-labeled dataset that will enable automatic identification of homophobic/transphobic content. We trained the annotators and provided them with comprehensive annotation rules because this is a sensitive issue, and we previously found that untrained crowdsourced annotators struggle to diagnose content targeting these groups because of cultural and other biases. The dataset contains 15,141 annotated multilingual comments. This paper describes the process of constructing the dataset, a qualitative analysis of the data, and the inter-annotator agreement. In addition, we create baseline models for the dataset. To the best of our knowledge, our dataset is the first such dataset to be created. Warning: this paper contains explicit statements of homophobia, transphobia, and stereotypes that may be distressing to some readers.
Task-oriented dialogue systems (TODS) continue to rise in popularity as various industries find ways to effectively harness their capabilities, saving both time and money. However, even state-of-the-art TODS are not yet reaching their full potential. TODS typically have a primary design focus on completing the task at hand, so metrics of task resolution take priority. Other conversational quality attributes that may point to the success, or otherwise, of the dialogue can be overlooked. This can lead to interactions between human and dialogue system that leave the user dissatisfied or frustrated. This paper explores the literature on evaluation frameworks for dialogue systems and the role of conversational quality attributes in dialogue systems, looking at whether, how, and where they are relevant to dialogue system performance.
Any organization needs to improve its products, services, and processes. In this context, engaging with customers and understanding their journey is essential. Organizations have leveraged various techniques and technologies to support customer engagement, from call centres to chatbots and virtual agents. Recently, these systems have used Machine Learning (ML) and Natural Language Processing (NLP) to analyze large volumes of customer feedback and engagement data. The goal is to understand customers in context and provide meaningful answers across various channels. Despite multiple advances in Conversational Artificial Intelligence (AI) and Recommender Systems (RS), it is still challenging to understand the intent behind customer questions during the customer journey. To address this challenge, in this paper, we study and analyze the recent work in Conversational Recommender Systems (CRS) in general and, more specifically, in chatbot-based CRS. We introduce a pipeline to contextualize the input utterances in conversations. We then take the next step towards leveraging reverse feature engineering to link the contextualized input and learning model to support intent recognition. Since performance evaluation is achieved based on different ML models, we use transformer-based models to evaluate the proposed approach using a labelled dialogue dataset (MSDialogue) of question-answering interactions between information seekers and answer providers.
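The abstract does not spell out the contextualization step, but a minimal sketch of one plausible reading, prepending the preceding turns before transformer-based intent classification, is shown below; the model name, intent labels, and separator scheme are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: join the last few turns with the current utterance, then classify
# the intent with a transformer sequence classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INTENTS = ["original_question", "clarifying_question", "potential_answer",
           "gratitude"]  # illustrative subset of MSDialog-style intent labels

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(INTENTS))

def contextualize(history, utterance, max_turns=2):
    """Concatenate the last `max_turns` turns with the current utterance."""
    context = tok.sep_token.join(history[-max_turns:])
    return tok(context, utterance, truncation=True, return_tensors="pt")

inputs = contextualize(["How do I reset my password?",
                        "Which product are you using?"],
                       "Outlook on Windows 10.")
with torch.no_grad():
    logits = model(**inputs).logits
print(INTENTS[logits.argmax(-1).item()])  # prediction from an untrained head
```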
Understanding toxicity in user conversations is undoubtedly an important problem. As pointed out in previous work, addressing 'covert' or implicit cases of toxicity is particularly hard and requires context. Few previous studies have analyzed the influence of conversational context on human perception or on automated detection models. We dive deep into both these directions. We first analyze existing contextual datasets and conclude that human toxicity labeling is, in general, influenced by the conversational structure, polarity, and topic of the context. We then propose to bring these findings into computational detection models by introducing (a) neural architectures for contextual toxicity detection that are aware of the conversational structure, and (b) data augmentation strategies that can help model contextual toxicity detection. Our results show the encouraging potential of neural architectures that are aware of conversation structure. We also show that such models can benefit from synthetic data, especially in the social media domain.
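As a hedged illustration of what a conversation-structure-aware classifier could look like (not the paper's architecture), the toy model below encodes the parent comment and the reply with separate encoders and combines both states before predicting toxicity.

```python
# Toy context-aware toxicity classifier: the prediction for a reply can
# depend on its parent comment, not only on the reply itself.
import torch
import torch.nn as nn

class ContextualToxicityClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.parent_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        self.reply_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)  # toxic vs. non-toxic

    def forward(self, parent_ids, reply_ids):
        _, h_parent = self.parent_enc(self.embed(parent_ids))
        _, h_reply = self.reply_enc(self.embed(reply_ids))
        joint = torch.cat([h_parent[-1], h_reply[-1]], dim=-1)
        return self.out(joint)

model = ContextualToxicityClassifier(vocab_size=10_000)
parent = torch.randint(1, 10_000, (4, 20))  # batch of 4 tokenized parents
reply = torch.randint(1, 10_000, (4, 12))   # batch of 4 tokenized replies
logits = model(parent, reply)               # shape: (4, 2)
```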
Sarcasm can be defined as saying or writing the opposite of what one truly wants to express, usually to insult, irritate, or amuse someone. Because of the obscure nature of sarcasm in textual data, detecting it is difficult and of great interest to the sentiment analysis research community. Although research in sarcasm detection spans more than a decade, some significant advancements have been made recently, including the adoption of unsupervised pre-trained transformers in multimodal settings and the integration of context to identify sarcasm. In this study, we aim to provide a brief overview of recent advancements and trends in computational sarcasm research for the English language. We describe the relevant datasets, approaches, trends, issues, challenges, and tasks relating to sarcasm that remain hard to detect. Our study provides well-summarized tables of sarcasm datasets, sarcastic features and their extraction methods, and performance analyses of various approaches, which can help researchers in related fields understand the current state-of-the-art practices in sarcasm detection.