New York City’s GenAI Chatbot Faces Backlash for Giving Inaccurate Advice and Encouraging Lawbreaking
Artificial intelligence (AI) has become a routine part of everyday life, from virtual assistants to chatbots. However, a recent incident involving a generative AI (GenAI) chatbot developed by New York City has raised concerns about the accuracy and reliability of the technology.
The “MyCity” chatbot, powered by Microsoft’s Azure AI services, has come under fire for providing incorrect information and even advising small business owners to break the law. The bot answers questions about housing policy, workers’ rights, and rules for entrepreneurs, and its responses are often dangerously inaccurate.
For example, the bot incorrectly stated that landlords do not need to accept tenants who use rental assistance or Section 8 vouchers, which contradicts New York City law. It also advised that stores could go cashless, even though the city has required stores to accept cash as payment since 2020.
Despite these inaccuracies, the bot remains available online and continues to hand out false guidance. New York City has strengthened its disclaimer to note that the bot’s answers are not legal advice, but critics argue that the city has not put sufficient safeguards in place to prevent such misinformation.
Julia Stoyanovich, a computer science professor at New York University, has called the city’s approach reckless and irresponsible, arguing that rolling out unproven software without oversight is a dangerous move that could harm users.
The legal implications of providing inaccurate advice through AI systems are also significant. A recent case involving Air Canada, in which the airline was taken to a tribunal after its chatbot gave a customer inaccurate advice, serves as a cautionary tale: the tribunal ruled in the customer’s favor, holding Air Canada responsible for the information its chatbot provided.
In response to the controversy, Microsoft has pledged to work with New York City employees to improve the chatbot’s accuracy and to ensure that its outputs are grounded in official documentation.
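The pledge does not spell out how that grounding would work, but one common pattern is to let a bot answer only from excerpts retrieved from an index of official documents and to refuse when nothing relevant turns up. The Python sketch below is purely illustrative: the document store, the keyword retrieval, and every function name are assumptions made for the example, not the MyCity implementation.

```python
# Illustrative sketch only -- not the MyCity implementation. It shows one way to
# keep a chatbot's answers tied to official documentation: answer only from
# retrieved excerpts and refuse when no relevant source is found.
from dataclasses import dataclass


@dataclass
class Excerpt:
    source: str  # label of the official document the excerpt came from (placeholder values below)
    text: str


# Toy stand-in for an index of official city documents.
OFFICIAL_EXCERPTS = [
    Excerpt("Housing guidance (placeholder)",
            "Landlords must accept tenants who use rental assistance or Section 8 vouchers."),
    Excerpt("Retail guidance (placeholder)",
            "Stores must accept cash as payment."),
]


def retrieve(question: str, excerpts: list[Excerpt]) -> list[Excerpt]:
    """Naive keyword overlap; a real system would use a search index or embeddings."""
    words = set(question.lower().replace("?", "").split())
    # Require at least two shared words so loosely related excerpts are not pulled in.
    return [e for e in excerpts if len(words & set(e.text.lower().rstrip(".").split())) >= 2]


def answer(question: str) -> str:
    hits = retrieve(question, OFFICIAL_EXCERPTS)
    if not hits:
        # Refusing is safer than guessing when no official source supports an answer.
        return "No official source found for that question; please contact the relevant agency."
    cited = "\n".join(f"- {e.text} ({e.source})" for e in hits)
    return f"Based on official documentation:\n{cited}\nThis is general information, not legal advice."


if __name__ == "__main__":
    print(answer("Do stores have to accept cash?"))
```

The important design choice in a sketch like this is the refusal path: when retrieval finds no supporting source, the bot declines to answer rather than improvising, which is the kind of safeguard critics say was missing.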
As AI continues to play a larger role in daily life, developers and policymakers must prioritize accuracy, transparency, and accountability. The New York City chatbot debacle is a reminder of the pitfalls of relying on AI without proper oversight and of the risks of deploying such systems without sufficient safeguards in place.