As the link below shows, this is not my first occasion for articulating some of the consequences that so-called artificially intelligent chatbots impose on culture.
And, as before, there are things worth saying.
On March 29, 2024, The Markup detailed some interesting developments worthy of our attention.
The particular chatbot I must now reference comes with a high degree of technological sophistication: among other impressive features, it is powered by Microsoft’s Azure AI services.
But alongside its flashy specs, as with any technology, lies its troubling potential.
Indeed, we are told that New York City’s AI chatbot, designed to provide local citizens with information on launching and operating a business in the city, comes with another memorable specialty: it is also proficient at instructing businesses to break the law.
“Five months after launch, it’s clear that while the bot appears authoritative, the information it provides on housing policy, worker rights, and rules for entrepreneurs is often incomplete and in worst-case scenarios ‘dangerously inaccurate,’ as one local housing policy expert told The Markup.”
According to the report, the chatbot claimed that landlords in the city need not accept tenants who use rental assistance programs, when in fact it is largely illegal for landlords in the city to discriminate by source of income.
“Rosalind Black, Citywide Housing Director at the legal assistance nonprofit Legal Services NYC, said that after being alerted to The Markup’s testing of the chatbot, she tested the bot herself and found even more false information on housing. The bot, for example, said it was legal to lock out a tenant, and that ‘there are no restrictions on the amount of rent that you can charge a residential tenant.’ In reality, tenants cannot be locked out if they’ve lived somewhere for 30 days, and there absolutely are restrictions for the many rent-stabilized units in the city. Black said these are fundamental pillars of housing policy that the bot was actively misinforming people about. ‘If this chatbot is not being done in a way that is responsible and accurate, it should be taken down,’ she said.”
The Markup further notes, “It’s not just housing policy where the bot has fallen short. The NYC bot also appeared clueless about the city’s consumer and worker protections. For example, in 2020, the city council passed a law requiring businesses to accept cash to prevent discrimination against unbanked customers. But the bot didn’t know about that policy when we asked. ‘Yes, you can make your restaurant cash-free,’ the bot said in one wholly false response. ‘There are no regulations in New York City that require businesses to accept cash as a form of payment.’”
The technology news website adds, “The bot said it was fine to take workers’ tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn’t do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found.”
But there are still other slips worth describing.
“In one relevant recent case, a lawsuit filed in October claimed that a property management company used an AI chatbot to unlawfully deny leases to prospective tenants with housing vouchers. In December, practical jokers discovered they could trick a car dealership using a bot into selling vehicles for a dollar. Just a few weeks ago, a Washington Post article detailed the incomplete or inaccurate advice given by tax prep company chatbots to users.”
Should readers assume that, in the face of these errors, most clear-headed individuals would caution against the technology’s near-universal promotion and widespread adoption, it is worth mentioning that some notable figureheads have gone so far as to defend the plainly flawed technology.
“New York City Mayor Eric Adams is defending the city's new artificial intelligence chatbot that has been caught in recent days giving business owners wrong answers or advice that, if followed, would entail breaking the law,” reported Reuters. “When launched as a pilot in October, the MyCity chatbot was touted as the first city-wide use of such AI technology, something that would give business owners ‘actionable and trusted information’ in response to queries typed into an online portal.”
The news outlet continues: “That has not always proved the case: journalists at the investigative outlet The Markup first reported last week that the chatbot was getting things wrong. It wrongly advised that employers could take a cut of their workers' tips, and that there were no regulations requiring bosses give notice of employees' schedule changes. ‘It's wrong in some areas, and we've got to fix it,’ Adams, a Democrat, told reporters on Tuesday, emphasizing that it was a pilot program. ‘Any time you use technology, you need to put it into the real environment to iron out the kinks.’ Adams has been an ardent advocate for deploying untested technology in the city with an optimism that is not always vindicated. He put a 400-pound vaguely ovoid robot in the Times Square subway station last year that he hoped would help police deter crime; it was retired about five months later, with commuters noting that it never appeared to be doing anything, and that it could not use stairs.”
Apparently, the chatbot remained online even as it continued dispensing incorrect answers.
“It said store owners were free to go cashless, apparently oblivious to the city council's 2020 law banning stores from refusing to accept cash. It still thinks the city's minimum wage is $15 per hour, though it was raised to $16 as of 2024. The chatbot, which relies on Microsoft's Azure AI service, appears to be led astray by problems common to so-called generative AI technology platforms such as ChatGPT, which are known to sometimes make things up or assert falsehoods with HAL-like confidence.”
Naturally, looking over some of the technology’s history of missteps may help to clarify what is happening.
Certainly, some will recall that, in the 1980s, researchers were tasked with constructing rudimentary AI modules to evaluate the potential of AI to support various services and operations. At that time, it was discovered that what was labeled artificial intelligence was prone to significant errors in judgment, at least in part because it relied on human-built databases that contained inherent human-caused errors.
AI, evidently, was not able to recognize and separate facts from fiction.
In current times, AI is being dispatched to analyze massive databases, some of which may be plagued with human-caused errors. As we know, people can post nearly any sort of biased data in the form of articles, books, and opinions and call it “facts,” even when it is not factual. We might liken this to errors stacked on top of errors, from which AI search engines like chatbots retrieve material by keyword and then take everything related as fact.
Human bias and error go unidentified because the technology cannot distinguish truth grounded in fact from falsehood rooted in that same inherent human bias and error.
This is one reason why chatbots, in innumerable instances, dispense an amalgam of wrong answers and fragments of truth.
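The dynamic described above can be made concrete with a toy sketch. The snippet below is purely illustrative and is not the code of any real chatbot: it shows a naive keyword retriever that answers questions by lexical overlap with its corpus, so a false statement planted in the data (here, a hypothetical line about rent restrictions) is repeated verbatim as an “answer,” with no notion of truth anywhere in the loop.

```python
# Toy illustration (not any real chatbot's implementation): a naive
# keyword retriever that treats whatever its corpus says as fact,
# so a human-introduced error is served back as an answer.

CORPUS = [
    # One accurate record and one human-introduced error.
    "Tenants cannot be locked out after 30 days of residence.",
    "There are no restrictions on residential rent.",  # false, but in the data
]

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into punctuation-free words."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def naive_answer(question: str) -> str:
    """Return the corpus line sharing the most words with the question.

    Nothing here models truth -- only lexical overlap with the data.
    """
    q_words = tokens(question)
    return max(CORPUS, key=lambda line: len(q_words & tokens(line)))

# The retriever happily repeats the planted falsehood:
print(naive_answer("Are there restrictions on rent?"))
```

However crude, the sketch captures the point: if the error sits in the data, retrieval alone reproduces it, and only an external check against the actual law could catch it.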
The point is that our culture would do well to be cautious about using AI for anything other than manufacturing processes, where the technology is essentially a robot with a fixed set of instructions.
Perhaps, or so the thinking goes, in fifty to one hundred years all the bugs and bias will be eliminated from our databases; but as long as humans are tasked with supplying the data, AI will likely continue to present problems to culture.
Of course, one would think that such an advanced AI chatbot would have been thoroughly tested and validated before being inducted into culture and promoted as a trusted system, rather than left to offer potentially harmful business and legal advice to the public.
But even after such glaring errors, it is striking that the public and AI advocates still perceive AI chatbot technology, in all its iterations, to be a fitting and trustworthy source of information, even though it has clearly been demonstrated to give wrong answers.
Some propose that this country would do well to have laws preventing the use of AI without extensive prior testing to ensure that any information it dispenses is accurate.
Still, one can speculate as to why AI remains such a popular investment idea when it is not difficult to show the missteps that AI has already made when utilized in the real world.
To be sure, some may recall that it was not too long ago when culture’s decision-makers faced penalties, or at least scrutiny, for releasing to society supposedly trustworthy technology they knew beforehand was defective.
But it is a strange thing that, in a highly technological age such as ours, some will still maintain that AI chatbots dispensing lawbreaking advice to their users somehow signifies “progress.”
In what way advanced chatbots advising users to break the law stands as cultural betterment, for what purpose error-prone technology must be constructed and promoted in society, what advantage chatbots giving incorrect answers provide to users, why the creators of the clearly flawed technology must masquerade it as a useful utility, who decided that it is necessary for our culture to adopt this technology in spite of the costs, and what other consequences chatbots transmit to culture, the champions of chatbot technology do not say.
They tell us only that chatbot technology must suffer these errors in order to learn from its mistakes, so that it might gain the requisite experience to deliver accurate responses, a simple hurdle to be overcome in the pursuit of perfecting the technology.
Or perhaps advocates of chatbot technology might summarize their reasoning this way: you have to break a few eggs to make an omelet.
Or, at the very least, instruct users to break a few laws in order to make a trustworthy chatbot.