Educational

Author Topic: AI Chatbots Are Intruding Into Online Communities  (Read 1931 times)


Offline conlang returns

  • Elder
  • Postwhore
  • *****
  • Posts: 1018
  • Karma: 57
  • Gender: Female
  • I'm a fox UwU
    • My twitter
AI Chatbots Are Intruding Into Online Communities
« on: May 27, 2024, 01:29:26 AM »
AI Chatbots Are Intruding Into Online Communities

Quote
A parent asked a question in a private Facebook group in April 2024: Does anyone with a child who is both gifted and disabled have any experience with New York City public schools? The parent received a seemingly helpful answer that laid out some characteristics of a specific school, beginning with the context that “I have a child who is also 2e,” meaning twice exceptional.

On a Facebook group for swapping unwanted items near Boston, a user looking for specific items received an offer of a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.”

Both of these responses were lies. That child does not exist and neither do the camera or air conditioner. The answers came from an artificial intelligence chatbot.

According to a Meta help page, Meta AI will respond to a post in a group if someone explicitly tags it or if someone “asks a question in a post and no one responds within an hour.” The feature is not yet available in all regions or for all groups, according to the page. For groups where it is available, “admins can turn it off and back on at any time.”
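The trigger rule Meta describes can be sketched as a simple policy function. This is an assumed reconstruction of the help-page description, not Meta's actual code:

```python
from datetime import datetime, timedelta

def should_ai_reply(post_time, now, has_human_reply, tagged_ai, feature_enabled):
    """Decide whether the assistant replies, per the rule Meta describes."""
    if not feature_enabled:      # group admins can turn the feature off
        return False
    if tagged_ai:                # an explicit tag always triggers a reply
        return True
    # Otherwise, reply only if an hour has passed with no human response.
    return not has_human_reply and (now - post_time) >= timedelta(hours=1)

posted = datetime(2024, 4, 1, 9, 0)
print(should_ai_reply(posted, posted + timedelta(hours=2), False, False, True))     # unanswered for 2 hours
print(should_ai_reply(posted, posted + timedelta(minutes=30), False, False, True))  # too soon
```

The notable design choice, per the article, is the default: silence from humans is treated as an invitation for the bot, rather than the bot waiting to be asked.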

Meta AI has also been integrated into search features on Facebook and Instagram, and users cannot turn it off.

As a researcher who studies both online communities and AI ethics, I find the idea of uninvited chatbots answering questions in Facebook groups to be dystopian for a number of reasons, starting with the fact that online communities are for people.
Human Connections

In 1993, Howard Rheingold published the book “The Virtual Community: Homesteading on the Electronic Frontier” about the WELL, an early and culturally significant online community. The first chapter opens with a parenting question: What to do about a “blood-bloated thing sucking on our baby’s scalp.”

Rheingold received an answer from someone with firsthand knowledge of dealing with ticks and had resolved the problem before receiving a callback from the pediatrician’s office. Of this experience, he wrote, “What amazed me wasn’t just the speed with which we obtained precisely the information we needed to know, right when we needed to know it. It was also the immense inner sense of security that comes with discovering that real people – most of them parents, some of them nurses, doctors, and midwives – are available, around the clock, if you need them.”

This “real people” aspect of online communities continues to be critical today. Imagine why you might pose a question to a Facebook group rather than a search engine: because you want an answer from someone with real, lived experience or you want the human response that your question might elicit – sympathy, outrage, commiseration – or both.

Decades of research suggest that the human component of online communities is what makes them so valuable for both information-seeking and social support. For example, fathers who might otherwise feel uncomfortable asking for parenting advice have found a haven in private online spaces just for dads. LGBTQ+ youth often join online communities to safely find critical resources while reducing feelings of isolation. Mental health support forums provide young people with belonging and validation in addition to advice and social support.

Online communities are well-documented places of support for LGBTQ+ people.

In addition to similar findings in my own lab related to LGBTQ+ participants in online communities, as well as Black Twitter, two more recent studies, not yet peer-reviewed, have emphasized the importance of the human aspects of information-seeking in online communities.

One, led by PhD student Blakeley Payne, focuses on fat people’s experiences online. Many of our participants found a lifeline in access to an audience and community with similar experiences as they sought and shared information about topics such as navigating hostile healthcare systems, finding clothing and dealing with cultural biases and stereotypes.

Another, led by PhD student Faye Kollig, found that people who share content online about their chronic illnesses are motivated by the sense of community that comes with shared experiences, as well as the humanizing aspects of connecting with others to both seek and provide support and information.
Faux People

The most important benefits of these online spaces as described by our participants could be drastically undermined by responses coming from chatbots instead of people.

As a type 1 diabetic, I follow a number of related Facebook groups that are frequented by many parents newly navigating the challenges of caring for a young child with diabetes. Questions are frequent: “What does this mean?” “How should I handle this?” “What are your experiences with this?” Answers come from firsthand experience, but they also typically come with compassion: “This is hard.” “You’re doing your best.” And of course: “We’ve all been there.”

A response from a chatbot claiming to speak from the lived experience of caring for a diabetic child, offering empathy, would be not only inappropriate but borderline cruel.

However, it makes complete sense that these are the types of responses that a chatbot would offer. Large language models, simplistically, function more similarly to autocomplete than they do to search engines. For a model trained on the millions and millions of posts and comments in Facebook groups, the “autocomplete” answer to a question in a support community is definitely one that invokes personal experience and offers empathy – just as the “autocomplete” answer in a Buy Nothing Facebook group might be to offer someone a gently used camera.
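A toy bigram model makes the autocomplete point concrete. Assuming a small, made-up corpus of support-group-style replies (hypothetical data, not a real model or dataset), greedy next-word completion of "i have" reproduces exactly the kind of first-person answer the article describes:

```python
from collections import Counter, defaultdict

# Hypothetical corpus standing in for support-group replies.
corpus = [
    "i have a child who is also 2e",
    "i have a child who is also in nyc schools",
    "you are doing your best this is hard",
    "we were all there once you are doing fine",
]

# Bigram counts: for each word, how often each word follows it.
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def autocomplete(prompt, length=6):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(length):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("i have"))  # → "i have a child who is also 2e"
```

The model has no child and no experience; it simply emits the statistically likely continuation, which in a support community is a claim of personal experience. Real large language models are vastly more sophisticated, but the failure mode is the same in kind.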

Meta has rolled out an AI assistant across its social media and messaging apps.
Keeping Chatbots In Their Lanes

This isn’t to suggest that chatbots aren’t useful for anything – they may even be quite useful in some online communities, in some contexts. The problem is that in the midst of the current generative AI rush, there is a tendency to think that chatbots can and should do everything.

There are plenty of downsides to using large language models as information retrieval systems, and these downsides point to inappropriate contexts for their use. One is any context where incorrect information could be dangerous: an eating disorder helpline or legal advice for small businesses, for example.

Research is pointing to important considerations in how and when to design and deploy chatbots. For example, one recently published paper at a large human-computer interaction conference found that though LGBTQ+ individuals lacking social support were sometimes turning to chatbots for help with mental health needs, those chatbots frequently fell short in grasping the nuance of LGBTQ+-specific challenges.

Another found that though a group of autistic participants found value in interacting with a chatbot for social communication advice, that chatbot was also dispensing questionable advice. And yet another found that though a chatbot was helpful as a preconsultation tool in a health context, patients sometimes found expressions of empathy to be insincere or offensive.

Responsible AI development and deployment means not only auditing for issues such as bias and misinformation but also taking the time to understand in which contexts AI is appropriate and desirable for the humans who will be interacting with it. Right now, many companies are wielding generative AI as a hammer, and as a result, everything looks like a nail.

Many contexts, such as online support communities, are best left to humans.

Casey Fiesler is an Associate Professor of Information Science at the University of Colorado Boulder. This article is republished from The Conversation under a Creative Commons license. Read the original article.

https://www.discovermagazine.com/technology/ai-chatbots-are-intruding-into-online-communities
https://theconversation.com/ai-chatbots-are-intruding-into-online-communities-where-people-are-trying-to-connect-with-other-humans-229473



Student's creed: everything is due, and nothing is submitted

Offline Icequeen

  • News Box Slave
  • Insane Postwhore
  • *****
  • Posts: 12027
  • Karma: 2030
  • Gender: Female
  • I peopled today.
Re: AI Chatbots Are Intruding Into Online Communities
« Reply #1 on: May 27, 2024, 11:08:55 AM »
It does not surprise me in the least that Facebook would be among the first to implement AI usage in this manner. It's a f*cking dumpster fire (although it's become a necessary one for some of us) for various reasons and this will just nudge it a little further into the flames.

AI technology has the potential to transform lives for the better and do a lot of good with proper usage.

It's just the complete lack of policing that usage and keeping the a$$hats from abusing the technology that scares me.
Too many sick and twisted people out there.

It will get worse before it gets better I think.

Offline odeon

  • Witchlet of the Aspie Elite
  • Webmaster
  • Postwhore Beyond Repair
  • *****
  • Posts: 108879
  • Karma: 4482
  • Gender: Male
  • Replacement Despot
Re: AI Chatbots Are Intruding Into Online Communities
« Reply #2 on: June 12, 2024, 11:20:37 AM »
I was attending a conference last week. The first three talks directly involved AI and several others mentioned it. I made sure mine didn't.
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."

- Albert Einstein

Offline conlang returns

  • Elder
  • Postwhore
  • *****
  • Posts: 1018
  • Karma: 57
  • Gender: Female
  • I'm a fox UwU
    • My twitter
Re: AI Chatbots Are Intruding Into Online Communities
« Reply #3 on: August 27, 2024, 11:31:49 PM »
Quote
It does not surprise me in the least that Facebook would be among the first to implement AI usage in this manner. It's a f*cking dumpster fire (although it's become a necessary one for some of us) for various reasons and this will just nudge it a little further into the flames.

AI technology has the potential to transform lives for the better and do a lot of good with proper usage.

It's just the complete lack of policing that usage and keeping the a$$hats from abusing the technology that scares me.
Too many sick and twisted people out there.

It will get worse before it gets better I think.

It will definitely get worse.

https://getpocket.com/explore/item/why-nothing-works-anymore



Student's creed: everything is due, and nothing is submitted

Offline conlang returns

  • Elder
  • Postwhore
  • *****
  • Posts: 1018
  • Karma: 57
  • Gender: Female
  • I'm a fox UwU
    • My twitter
Re: AI Chatbots Are Intruding Into Online Communities
« Reply #4 on: August 27, 2024, 11:33:08 PM »
Why nothing works article:

Quote
“No… it’s a magic potty,” my daughter used to lament, age 3 or so, before refusing to use a public restroom stall with an automatic-flush toilet. As a small person, she was accustomed to the infrared sensor detecting erratic motion at the top of her head and violently flushing beneath her. Better, in her mind, just to delay relief than to subject herself to the magic potty’s dark dealings.

It’s hardly just a problem for small people. What adult hasn’t suffered the pneumatic public toilet’s whirlwind underneath them? Or again when attempting to exit the stall? So many ordinary objects and experiences have become technologized—made dependent on computers, sensors, and other apparatuses meant to improve them—that they have also ceased to work in their usual manner. It’s common to think of such defects as matters of bad design. That’s true, in part. But technology is also more precarious than it once was. Unstable, and unpredictable. At least from the perspective of human users. From the vantage point of technology, if it can be said to have a vantage point, it's evolving separately from human use.

“Precarity” has become a popular way to refer to economic and labor conditions that force people—and particularly low-income service workers—into uncertainty. Temporary labor and flexwork offer examples. That includes hourly service work in which schedules are adjusted ad-hoc and just-in-time, so that workers don’t know when or how often they might be working. For low-wage food service and retail workers, for instance, that uncertainty makes budgeting and time-management difficult. Arranging for transit and childcare is difficult, and even more costly, for people who don’t know when—or if—they’ll be working.

Such conditions are not new. As union-supported blue-collar labor declined in the 20th century, the service economy took over its mantle absent its benefits. But the information economy further accelerated precarity. For one part, it consolidated existing businesses and made efficiency its primary concern. For another, economic downturns like the 2008 global recession facilitated austerity measures both deliberate and accidental. Immaterial labor also rose—everything from the unpaid, unseen work of women in and out of the workplace, to creative work done on-spec or for exposure, to the invisible work everyone does to construct the data infrastructure that technology companies like Google and Facebook sell to advertisers.

But as it has expanded, economic precarity has birthed other forms of instability and unpredictability—among them the dubious utility of ordinary objects and equipment.

The contemporary public restroom offers an example. Infrared-sensor flush toilets, fixtures, and towel-dispensers are sometimes endorsed on ecological grounds—they are said to save resources by regulating them. But thanks to their overzealous sensors, these toilets increase water or paper consumption substantially. Toilets flush three times instead of one. Faucets open at full-blast. Towel dispensers mete out papers so miserly that people take more than they need. Instead of saving resources, these apparatuses mostly save labor and management costs. When a toilet flushes incessantly, or when a faucet shuts off on its own, or when a towel dispenser discharges only six inches of paper when a hand waves under it, it reduces the need for human workers to oversee, clean, and supply the restroom.

Given its connection to the hollowing-out of labor in the name of efficiency, automation is most often lamented for its inhumanity, a common grievance of bureaucracy. Take the interactive voice response (IVR) telephone system. When calling a bank or a retailer or a utility for service, the IVR robot offers recordings and automated service options to reduce the need for customer service agents—or to discourage customers from seeking them in the first place.

Once decoupled from their economic motivations, devices like automatic-flush toilets acclimate their users to apparatuses that don’t serve users well in order that they might serve other actors, among them corporations and the sphere of technology itself. In so doing, they make that uncertainty feel normal.

It’s a fact most easily noticed when using old-world gadgets. To flush a toilet or open a faucet by hand offers almost wanton pleasure given how rare it has become. A local eatery near me whose interior design invokes the 1930s features a bathroom with a white steel crank-roll paper towel dispenser. When spun on its ungeared mechanism, an analogous, glorious measure of towel appears directly and immediately, as if sent from heaven.

***

Rolling out a proper portion of towel feels remarkable largely because that victory also seems so rare, even despite constant celebrations of technological accomplishment. The frequency with which technology works precariously has been obscured by culture’s obsession with technological progress, its religious belief in computation, and its confidence in the mastery of design. In truth, hardly anything works very well anymore.

The other day I attempted to congratulate my colleague Ed Yong for becoming a Los Angeles Times Book Prize finalist. I was tapping “Awesome, Ed!” into my iPhone, but it came out as “Aeromexico, Ed!” What happened? The iPhone’s touchscreen keyboard works, in part, by trying to predict what the user is going to type next. It does this invisibly, by increasing and decreasing the tappable area of certain keys based on the previous keys pressed. This method—perhaps necessary to make the software keyboard work at all—amplifies a mistype that autocorrect then completes. And so goes the weird accident of typing on today’s devices, when you hardly ever say what you mean the first time.
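The mechanism described — invisibly resizing each key's tappable area based on predicted next letters — can be sketched as a toy model. This is an illustration of the idea, not Apple's actual keyboard code; the positions, radius, and probabilities are made up:

```python
# 1-D toy layout: centers of two adjacent keys.
KEY_POS = {"s": (0.0, 0.0), "e": (1.0, 0.0)}
BASE_RADIUS = 0.5

def resolve_tap(x, y, next_letter_prob):
    """Pick the key whose probability-scaled distance to the tap is smallest."""
    def score(key):
        kx, ky = KEY_POS[key]
        dist = ((x - kx) ** 2 + (y - ky) ** 2) ** 0.5
        # Dividing by (base + probability) shrinks the effective distance
        # to likely keys, i.e. enlarges their invisible hit target.
        return dist / (BASE_RADIUS + next_letter_prob.get(key, 0.0))
    return min(KEY_POS, key=score)

# The same ambiguous tap, exactly halfway between "s" and "e",
# resolves differently depending on what the model expects next:
print(resolve_tap(0.5, 0.0, {"e": 0.9, "s": 0.1}))  # → e
print(resolve_tap(0.5, 0.0, {"e": 0.1, "s": 0.9}))  # → s
```

When the prediction is right, this feels like magic; when it is wrong, it converts a near-miss into a confident mistype that autocorrect then cheerfully finishes, which is how "Awesome" becomes "Aeromexico."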

The effects of business consolidation and just-in-time logistics offer another example. Go to Amazon.com and search for an ordinary product like a pair of shoes or a toaster. Amazon wants to show its users as many options as possible, so it displays anything it can fulfill directly or whose fulfillment it can facilitate via one of many catalog partnerships. In some cases, one size or color of a particular shoe might be available direct from Amazon, shipped free or fast or via its Prime two-day delivery service, while another size or color might come from a third party, shipped later or at increased cost. There is no easy way to discern what’s truly in stock.

Digital distribution has also made media access more precarious. Try explaining to a toddler that the episodes of “Mickey Mouse Clubhouse” that were freely available to watch yesterday via subscription are suddenly available only via on-demand purchase. Why? Some change in digital licensing, probably, or the expiration of a specific clause in a distribution agreement. Then try explaining that when the shows are right there on the screen, just the same as they always have been.

Or, try looking for some information online. Google’s software displays results based on a combination of factors, including the popularity of a web page, its proximity in time, and the common searches made by other people in a geographic area. This makes some searches easy and others difficult. Looking for historical materials almost always brings up Wikipedia, thanks to that site’s popularity, but it doesn’t necessarily fetch results based on other factors, like the domain expertise of its author. As often as not, Googling obscures more than it reveals.
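The trade-off described above can be illustrated with a toy weighted scorer. The weights and page data here are invented for illustration, not Google's actual formula:

```python
# Popularity dominates the (made-up) weights, so a popular generalist
# page outranks a niche expert page despite lower domain expertise.
WEIGHTS = {"popularity": 0.7, "recency": 0.2, "expertise": 0.1}

def score(page):
    """Combine the ranking factors as a simple weighted sum."""
    return sum(WEIGHTS[k] * page[k] for k in WEIGHTS)

pages = [
    {"name": "wikipedia-article",   "popularity": 0.95, "recency": 0.6, "expertise": 0.50},
    {"name": "historian-monograph", "popularity": 0.20, "recency": 0.3, "expertise": 0.98},
]
ranked = sorted(pages, key=score, reverse=True)
print([p["name"] for p in ranked])  # → ['wikipedia-article', 'historian-monograph']
```

Any fixed weighting encodes an editorial judgment about what "relevant" means; the point is that the judgment is baked in and invisible to the searcher.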

Most of these failures don’t seem like failures, because users have so internalized their methods that they apologize for them in advance. The best defense against instability is to rationalize uncertainty as intentional—and even desirable.

***

The common response to precarious technology is to add even more technology to solve the problems caused by earlier technology. Are the toilets flushing too often? Revise the sensor hardware. Is online news full of falsehoods? Add machine-learning AI to separate the wheat from the chaff. Are retail product catalogs overwhelming and confusing? Add content filtering to show only the most relevant or applicable results.

But why would new technology reduce rather than increase the feeling of precarity? The more technology multiplies, the more it amplifies instability. Things already don’t quite do what they claim. The fixes just make things worse. And so, ordinary devices aren’t likely to feel more workable and functional as technology marches forward. If anything, they are likely to become even less so.

Technology’s role has begun to shift, from serving human users to pushing them out of the way so that the technologized world can service its own ends. And so, with increasing frequency, technology will exist not to serve human goals, but to facilitate its own expansion.

This might seem like a crazy thing to say. What other purpose do toilets serve than to speed away human waste? No matter its ostensible function, precarious technology separates human actors from the accomplishment of their actions. It acclimates people to the idea that devices are not really there for them, but as means to accomplish those devices' own, secret goals.

This truth has been obvious for some time. Facebook and Google, so the saying goes, make their users into their products—the real customer is the advertiser or data speculator preying on the information generated by the companies' free services. But things are bound to get even weirder than that. When automobiles drive themselves, for example, their human passengers will not become masters of a new form of urban freedom, but rather a fuel to drive the expansion of connected cities, in order to spread further the gospel of computerized automation. If artificial intelligence ends up running the news, it will not do so in order to improve citizens' access to information necessary to make choices in a democracy, but to further cement the supremacy of machine automation over human editorial in establishing what is relevant.

There is a dream of computer technology’s end, in which machines become powerful enough that human consciousness can be uploaded into them, facilitating immortality. And there is a corresponding nightmare in which the evil robot of a forthcoming, computerized mesh overpowers and destroys human civilization. But there is also a weirder, more ordinary, and more likely future—and it is the one most similar to the present. In that future, technology’s and humanity’s goals split from one another, even as the latter seems ever more yoked to the former. Like people ignorant of the plight of ants, and like ants incapable of understanding the goals of the humans who loom over them, so technology is becoming a force that surrounds humans, that intersects with humans, that makes use of humans—but not necessarily in the service of human ends. It won’t take a computational singularity for humans to cede their lives to the world of machines. They’ve already been doing so, for years, without even noticing.

Ian Bogost is a contributing editor at The Atlantic and the Ivan Allen College Distinguished Chair in Media Studies at the Georgia Institute of Technology. His latest book is Play Anything.



Student's creed: everything is due, and nothing is submitted