Publications & Pre-prints
2025
- Generative AI and Perceptual Harms: Who’s Suspected of using LLMs? Kowe Kadoma, Danaé Metaxa, and Mor Naaman. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 2025.
Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people by generating ideas or producing higher-quality work, like many other AI tools they risk causing a range of harms that may disproportionately burden historically marginalized groups. In this work, we introduce and evaluate perceptual harms, a term for the harms caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, each of which entailed participants evaluating write-ups from mock freelance writers. We asked participants to state whether they suspected the freelancers of using AI, to rank the quality of their writing, and to evaluate whether they should be hired. We found some support for perceptual harms against certain demographic groups. At the same time, perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.
- Why So Serious? Exploring Timely Humorous Comments in AAC Through AI-Powered Interfaces. Tobias M Weinberg, Kowe Kadoma, Ricardo E. Gonzalez Penuela, and 2 more authors. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 2025.
People with disabilities that affect their speech may use speech-generating devices (SGDs), commonly referred to as Augmentative and Alternative Communication (AAC) technology. This technology enables practical conversation; however, delivering expressive and timely comments remains challenging. This paper explores how to extend AAC technology to support a subset of humorous expressions: delivering timely humorous comments (witty remarks) through AI-powered interfaces. To understand the role of humor in AAC and the challenges and experiences of delivering humor with AAC, we conducted seven qualitative interviews with AAC users. Based on these insights and the lead author’s firsthand experience as an AAC user, we designed four AI-powered interfaces to assist in delivering well-timed humorous comments during ongoing conversations. Our user study with five AAC users found that when timing is critical (e.g., delivering a humorous comment), AAC users are willing to trade agency for efficiency, contrasting with prior research in which they hesitated to delegate decision-making to AI. We conclude by discussing the trade-off between agency and efficiency in AI-powered interfaces and how AI can shape user intentions, and we offer design recommendations for AI-powered AAC interfaces. See our project and demo at: tobiwg.github.io/research/why_so_serious
2024
- The Role of Inclusion, Control, and Ownership in Workplace AI-Mediated Communication. Kowe Kadoma, Marianne Aubin Le Quere, Xiyu Jenny Fu, and 3 more authors. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024.
Given large language models’ (LLMs) increasing integration into workplace software, it is important to examine how biases in the models may impact workers. For example, stylistic biases in the language suggested by LLMs may cause feelings of alienation and result in increased labor for individuals or groups whose style does not match. We examine how such writer-style bias impacts inclusion, control, and ownership over the work when co-writing with LLMs. In an online experiment, participants wrote hypothetical job promotion requests using either hesitant or self-assured auto-complete suggestions from an LLM and reported their subsequent perceptions. We found that the style of the AI model did not impact perceived inclusion. However, individuals with higher perceived inclusion did perceive greater agency and ownership, an effect more strongly impacting participants of minoritized genders. Feelings of inclusion mitigated a loss of control and agency when accepting more AI suggestions.
- Estimating Exposure to Information on Social Networks. Buddhika Nettasinghe, Kowe Kadoma, Mor Naaman, and 1 more author. Trans. Soc. Comput., Aug 2024.
Estimating exposure to information on a social network is a problem with important consequences for our society. The exposure estimation problem involves finding the fraction of people on the network who have been exposed to a piece of information (e.g., a URL of a news article on Facebook, a hashtag on Twitter). The exact value of exposure to a piece of information is determined by two features: the structure of the underlying social network and the set of people who shared the piece of information. Often, both features are not publicly available (i.e., access to them is limited to the internal administrators of the platform) and are difficult to estimate from data. As a solution, we propose two methods to estimate the exposure to a piece of information in an unbiased manner: a vanilla method based on sampling the network uniformly, and a method that samples the network non-uniformly, motivated by the Friendship Paradox. We provide theoretical results that characterize the conditions (in terms of properties of the network and the piece of information) under which one method outperforms the other. Further, we outline extensions of the proposed methods to dynamic information cascades (where the exposure needs to be tracked in real time). We demonstrate the practical feasibility of the proposed methods via experiments on multiple synthetic and real-world datasets.
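  The sketch below is a minimal, hypothetical illustration of the two sampling strategies the abstract describes, not the paper's exact estimators: a vanilla estimator that samples nodes uniformly, and a degree-biased estimator in the spirit of the Friendship Paradox that samples a random endpoint of a random edge and reweights by inverse degree so the estimate stays unbiased. The adjacency-dict graph, function names, and sample sizes are illustrative assumptions.

  ```python
  import random

  def uniform_exposure_estimate(adj, sharers, n_samples=1000, rng=None):
      """Vanilla estimator: sample nodes uniformly and record the fraction
      whose neighbors include at least one sharer. The sample mean of this
      exposure indicator is an unbiased estimate of the exposure fraction."""
      rng = rng or random.Random(0)
      nodes = list(adj)
      hits = sum(
          any(u in sharers for u in adj[rng.choice(nodes)])
          for _ in range(n_samples)
      )
      return hits / n_samples

  def friendship_paradox_exposure_estimate(adj, sharers, n_samples=1000, rng=None):
      """Degree-biased estimator: sample a uniformly random endpoint of a
      uniformly random edge (so high-degree nodes are sampled more often),
      check whether that node is exposed, and reweight by
      (average degree / node degree) so the degree bias cancels."""
      rng = rng or random.Random(0)
      # Every directed copy (v, u) of an edge; picking one uniformly and
      # taking u samples nodes with probability proportional to degree.
      endpoints = [u for v in adj for u in adj[v]]
      avg_degree = len(endpoints) / len(adj)
      total = 0.0
      for _ in range(n_samples):
          u = rng.choice(endpoints)
          exposed = any(w in sharers for w in adj[u])
          total += exposed * avg_degree / len(adj[u])
      return total / n_samples

  if __name__ == "__main__":
      # Toy undirected graph as an adjacency dict; node "c" shared the item,
      # so the true exposure fraction (nodes with a sharing neighbor) is 0.8.
      adj = {
          "a": ["b", "c"],
          "b": ["a", "c", "d"],
          "c": ["a", "b", "d", "e"],
          "d": ["b", "c"],
          "e": ["c"],
      }
      sharers = {"c"}
      print(uniform_exposure_estimate(adj, sharers))
      print(friendship_paradox_exposure_estimate(adj, sharers))
  ```

  Intuitively, the degree-biased sampler can need fewer samples when exposure is concentrated around well-connected nodes, which is the kind of condition the paper's theoretical results characterize; the toy example above is only meant to show that both estimators target the same quantity.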