3 comments

  • namaria a month ago

    They do and it is very noticeable for me as a user. I pay for and use ChatGPT a lot, different models for different things. It is a great tool, but I studied it in depth before using it, and I try to have meaningful conversations with people close to me about it, because I think an accurate mental picture of what the models are and what they do is essential to using them responsibly.

    LLMs produce output that statistically approximates human language. If you feed one relevant context and ask simple questions, in my experience, it can save a lot of time on clerical work: producing structured text, markup, and so on. It can also be a great help as a very flexible search engine over the data it has ingested. But it goes off in the wrong direction very easily, and steering it seems to me to be wasted effort; it is easier to start over and try a different approach. Overly complicated prompts are not helpful either. It is better to feed the model documents and text that create the right context. Giving it instructions works only by accident, when they happen to contain the right words.

    Overall they are very useful, but the idea that they can do meaningful intellectual work is just fanciful.

  • edmundsauto a month ago

    A couple of points. First, this study seems relevant to someone using these tools as a search engine. The prompts (aka “queries”) are basic questions like “How did Xyz find out?” With some prompt engineering (adding more context and instructions, as in the sketch at the end of this comment), I bet these error rates could be significantly reduced.

    That said, there is no baseline, which always makes me suspicious. It would be interesting to see how humans would do on the same task.

    Third, I think the interesting comparison is accuracy versus cost. Some tasks are very sensitive to accuracy; some are not. By comparing the cost curve against humans doing the same work, we could make tradeoff decisions for any given application.

    My advice: don’t form too much of an opinion without having built something like this yourself. It’s really easy!
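
    Here is a rough sketch of what I mean by adding context and instructions, assuming the OpenAI Python client; the model name, the document text, and the question are placeholders I made up, not anything taken from the study:

      # Bare query vs. context-rich prompt, using the OpenAI Python client.
      # Placeholders: the model name, the document text, and the question.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      source_document = "...full text of the article being asked about..."
      question = "How did Xyz find out?"

      # Bare query, roughly what the study's prompts look like.
      bare = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": question}],
      )

      # Same question, grounded in the source text and constrained.
      grounded = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": "Answer only from the document provided. "
                          "If the document does not contain the answer, say so."},
              {"role": "user",
               "content": f"Document:\n{source_document}\n\nQuestion: {question}"},
          ],
      )

      print(bare.choices[0].message.content)
      print(grounded.choices[0].message.content)

    The second call gives the model the source material and an instruction to refuse when the document is silent, which is exactly the kind of grounding the bare queries lack.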

  • ChrisArchitect a month ago

    Actual article: AI Distortion is new threat to trusted information

    https://www.bbc.co.uk/mediacentre/2025/articles/how-distorti...