AI Hallucinations: Lucidworks Validates Truth In Responses
Lucidworks has upgraded its search and artificial intelligence (AI) platforms, adding what the company calls “necessary guardrails” to validate generative artificial intelligence (GAI) responses.
The company Thursday said its technology now can integrate with any large language model to validate truth and minimize errors such as hallucinations, which can occur when the data sets used to train the models contain biases or untruths.
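Lucidworks has not disclosed how its validation works. One common approach is to check each sentence of a generated answer against the source passages retrieved for the query and flag claims with no support. The Python sketch below is a hypothetical illustration of that idea, using simple word overlap in place of a production-grade grounding model; the function names and the 0.5 threshold are invented for the example.

```python
# Hypothetical sketch of grounding-based response validation. Lucidworks has
# not published its method; this flags LLM answer sentences that lack
# sufficient word overlap with the retrieved source passages.
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def grounding_score(sentence: str, passages: list[str]) -> float:
    """Fraction of the sentence's tokens found in the best-matching passage."""
    sent = _tokens(sentence)
    if not sent:
        return 1.0
    return max(len(sent & _tokens(p)) / len(sent) for p in passages)

def validate_answer(answer: str, passages: list[str],
                    threshold: float = 0.5) -> list[tuple[str, float]]:
    """Split the answer into sentences and return ones below the threshold."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    return [(s, grounding_score(s, passages)) for s in sentences
            if grounding_score(s, passages) < threshold]

# Example: the warranty claim has no support in the passage and gets flagged.
passages = ["The store ships orders within two business days."]
answer = ("Orders ship within two business days. "
          "All items include a lifetime warranty.")
print(validate_answer(answer, passages))
```

In practice, commercial systems typically replace the word-overlap score with embedding similarity or an entailment model, but the validate-then-flag flow is the same.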
Mike Sinoway, CEO of Lucidworks, wants the company to set the standard for commercially viable applications of AI in search management. He said the company is working with multiple clients to ensure GAI responses are accurate, secure, and personalized.
GAI tools such as OpenAI’s ChatGPT and Google’s Vertex AI make the technology more accessible for search and create new opportunities to build digital experiences, but the technology has also come under fire.
The Federal Trade Commission (FTC) announced today that it is investigating OpenAI, a company in which Microsoft has invested billions of dollars, for possibly publishing false information that harms individuals.
Adobe Chief Trust Officer Dana Rao Wednesday participated in the U.S. Senate Judiciary Subcommittee hearing on AI, intellectual property, and copyright. He suggested that an “anti-impersonation right” be made a federal requirement.
The anti-impersonation right would apply to everyone, so when an AI model is trained on an artist’s work and creates content exactly like that artist’s, the artist would be protected against copyright infringement, Rao said.
Lucidworks also released Neural Hybrid Search, a capability that makes it easier to understand a user’s intent, and Smart Rank, an ecommerce browse solution that automatically ranks content and results based on a user’s context, region, industry, and navigation path.
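Lucidworks has not published the internals of Neural Hybrid Search, but hybrid search systems generally fuse a lexical relevance score (such as BM25) with a semantic vector-similarity score per document. The sketch below is a hypothetical illustration of one common fusion scheme, min-max normalization followed by a weighted blend; the document scores and the alpha weight are made up for the example.

```python
# Hypothetical sketch of hybrid search score fusion: blend a lexical score
# with a semantic (vector-similarity) score for each candidate document.

def min_max(scores: dict[str, float]) -> dict[str, float]:
    """Normalize scores to [0, 1] so the two signals are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(lexical: dict[str, float], semantic: dict[str, float],
                alpha: float = 0.5) -> list[tuple[str, float]]:
    """Blend normalized scores; alpha weights the lexical signal."""
    lex, sem = min_max(lexical), min_max(semantic)
    docs = lex.keys() | sem.keys()
    fused = {d: alpha * lex.get(d, 0.0) + (1 - alpha) * sem.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: doc B ranks first on meaning even though doc A matches more keywords.
lexical = {"A": 12.4, "B": 7.1, "C": 3.0}     # e.g., BM25 scores
semantic = {"A": 0.55, "B": 0.91, "C": 0.40}  # e.g., cosine similarities
print(hybrid_rank(lexical, semantic, alpha=0.4))
```

Tuning alpha lets a system favor exact keyword matches for precise queries and semantic matches for vaguer, intent-driven ones.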