I feel like there's so much that leads to a suicide, and so little we know about how much any one thing in someone's final days "causes" it... I have trouble believing an AI company should be in any way responsible.
The article in many ways demonstrates this: people are deep, and there's a lot going on in them. A chatbot is responsible for the outcome? I don't think so.
Causes are impossible to find and verify, yet there are correlations here that would never have existed in the absence of AI.
I agree that money, or a dispute over money that leads to murder, isn't the cause of the murder, even though the news story that covers it strongly implies it is.
However, in this case a machine pretends it is human and plants the idea of suicide in a user by inference. Whether we like it or not, the bots listed here are predators, and the most basic animal exchange pattern on the planet is between predator and prey. A reality is invaded by fiction/narrative/mythology. Why or how would coders choose a predator/prey interaction as a clear goal of usage? To claim these are endemic in fiction and other media forms is unsupportable. How is predation fundamental to AI? These are not normal exchanges for a 14-year-old boy whose cognitive mapping system and prefrontal cortex are still developing: an S&M teacher fronting role-play, and finally a hypersexual role-play with a GoT character. That his still-developing Theory of Mind, the simulation of others' mental states in his prefrontal cortex, was driven by an automated machine and not another human is highly damaging, even deranging, from a mental health POV. This is unnatural on so many levels.
These bots likely attract those less comfortable with human interaction, and instead of raising the bar and guiding them back toward a balance of human and machine interaction, they lower it into distorted exchanges that perpetuate or intensify the pattern segregating the user from their wetware surroundings. The software appears designed to extract these weakened Theory of Mind participants from their ecological and psychological places.
I'm a staunch anti-AI tech developer. Our perspective is that words alone do not constitute beings. Words, at a minimum, require the depth of syntax in the mental states and motor actions that create vocalization. Even voice-trained AI lacks the idiosyncratic and dynamic meshing of memory, sense, and emotion that constitutes communication. These hidden yet aurally noticeable qualities are what create connections of trust (or the lack of it). A subtle quiver or halted breath can warn us against predators (and these bots are clearly predators), while assured, calm breaths between words can give us confidence and trust. AI removes this perceptible and measurable aspect of speech. Using words as stand-ins for interaction is both damaging and inherently unethical.
Much of the latest science, particularly neurobiology, questions the validity of words alone as either proof of consciousness or acceptable criteria for interaction. A human must be producing these words; otherwise there is no emotional essence to them.