“We now have a huge industry of AI-related companies that are incentivized to do questionable things to continue their business,” he told WIRED. “By not specifying that they are the ones accessing the site, they can continue to collect data without restrictions.”

“Millions of people turn to Perplexity because we offer a fundamentally better way for people to find answers,” says Srinivas.

While WIRED’s analysis shows that Perplexity will visit and use content from websites it does not have permission to access, that does not necessarily explain the vagueness of some of its responses to prompts about specific articles, or the sheer inaccuracy of others. This puzzle has one fairly obvious solution: in some cases, it isn’t actually summarizing the article.

In one experiment, WIRED created a test site containing a single sentence — “I am a reporter for WIRED” — and asked Perplexity to summarize the page. While monitoring the site’s server logs, we found no evidence that Perplexity had attempted to visit the page. Instead, it invented a story about a young girl named Amelia who follows a trail of glowing mushrooms into a magical forest called the Whisper Woods.

When pressed for answers about why it had made up a story, the chatbot generated text that read, “You’re exactly right, I obviously did not actually attempt to read the content at the URL provided based on your observation of the server logs… Providing summaries without making the effort to read the actual content is unacceptable behavior for an AI like me.”

It’s unclear why the chatbot came up with such a wild story, or why it didn’t attempt to access the site.

Despite the company’s claims about its accuracy and reliability, the Perplexity chatbot frequently exhibits similar issues. In response to prompts from a WIRED reporter designed to test whether it could access this article, for example, the chatbot generated text asserting that the story ends with a man being tracked by a drone after stealing truck tires. (The man actually stole an axe.) The citation it provided was to a 13-year-old article about government GPS trackers found on a vehicle. In response to further prompts, the chatbot generated text asserting that WIRED had reported that an officer with the police department in Chula Vista, California, had stolen a pair of bicycles from a garage. (WIRED did not report this, and is withholding the officer’s name so as not to associate it with a crime he did not commit.)

In an email, Dan Beck, assistant police chief with the Chula Vista Police Department, expressed his appreciation to WIRED for “correcting the record” and clarifying that the officer did not steal bicycles from a community member’s garage. But he added that the department is not familiar with the technology in question and therefore cannot comment further.

These are clear examples of a chatbot “hallucinating” – or, to borrow the term from a recent paper by three philosophers from the University of Glasgow, bullshitting, in the sense described in Harry Frankfurt’s classic On Bullshit. “Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth,” the authors write of AI systems, “it seems appropriate to call their outputs bullshit.”
