You're really stuck on the surface-level semantics of this, and it seems like you're trying to be right on a technicality rather than engaging with the actual effect and issue here, which I find strange.
I'm going to stop responding because this is a waste of time, but I'll leave you with this:
You are saying that this case comes down to user error: they did not use the tool correctly, and therefore they prompted an incorrect answer.
To believe this while also believing that AI is a useful tool, you must believe that this user error was avoidable.
So I ask you: if you were those lawyers, what prompt would you have used, better than their "Where did you get this filing from?" and "What is the source of these documents?", that would have elicited a truthful response from the tool? I ask because those are the questions that prompted the "they're from PACER" response.
If you cannot come up with a clearer, less leading question, then it's not user error, because it's not reasonable to expect any user to get the tool to function as advertised when it produces falsehoods not only at the user's behest but against their will and explicit instructions.