Can AI advise illegal action?
NYC Chatbot caught telling businesses to break the law
An artificial intelligence-powered chatbot created by New York City to help small business owners is facing criticism for dispensing bizarre advice that misstates local policies and advises companies to violate the law.
But days after the issues were first reported last week by tech news outlet The Markup, the city has opted to leave the tool on its official government website. Mayor Eric Adams defended the decision this week even as he acknowledged the chatbot’s answers were “wrong in some areas.”
Launched in October as a “one-stop shop” for business owners, the chatbot offers users algorithmically generated text responses to questions about navigating the city’s bureaucratic maze.
It includes a disclaimer that it may “occasionally produce incorrect, harmful or biased” information, along with a since-strengthened caveat that its answers are not legal advice.
Yet it continues to dole out false guidance, troubling experts who say the buggy system highlights the dangers of governments embracing AI-powered tools without sufficient guardrails.
Commentary by: Raylea Stelmach
Edited by: Kim Moss
EPSHRM provides content as a service to its readers and members. It does not offer legal advice, and cannot guarantee the accuracy or suitability of its content for a particular purpose.