- Governments are exploring the use of AI chatbots to automate services and advice, but experts warn of limitations and potential errors.
- While AI chatbots can provide human-like responses, they can also make mistakes or produce nonsensical answers, known as hallucinations.
- Estonia is developing chatbots using Natural Language Processing (NLP) technology, which is less prone to errors but has limited potential compared to Large Language Models (LLMs) like ChatGPT.
Governments are keen to use AI chatbots to improve their services and advice, but experts are warning of the limitations of these systems. While AI chatbots can provide human-like responses, they can also make mistakes or confidently produce false or nonsensical answers, a failure mode known as hallucination. This raises concerns about the reliability and accountability of such systems, especially in areas where accuracy is crucial, such as government services.
Estonia is taking a different approach, developing chatbots based on Natural Language Processing (NLP) technology, which is less prone to errors but less capable than Large Language Models (LLMs) like ChatGPT. Estonia's chatbots use NLP algorithms to break a request down into small segments, identify key words, and infer what the user wants. While this approach is less flexible than LLM-based chatbots, it gives the government more control over and transparency into the system's behavior, and reduces the risk of errors.