Fundamentally, I have no problem using AI to do research as long as you verify the results. I regularly use AI to research things like programming questions or to check somebody else's claims about something. I look at it as a far more efficient Google search: instead of wading through a pile of search results that are often badly dated, I get something pithy. But I test the results. If the AI gives me an answer that should in theory be correct but isn't, because the programming language doesn't actually have the function the AI thinks it should have, I tell it that it's wrong. I half hope that feedback eventually corrects the model, but who knows if that happens. It would be cool if the people who maintain the language watched for this kind of thing, so they could add functionality that arguably should be there.
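To be concrete about what "testing the results" means, here is a minimal sketch in Python. The method name "rjustify" is made up purely to stand in for the kind of plausible-but-nonexistent suggestion an AI might produce; the point is just that a two-line check catches it before it goes into real code:

    # AI-suggested method name; "rjustify" is a hypothetical hallucination,
    # the real string method is "rjust".
    suggested = "rjustify"

    if hasattr(str, suggested):
        # The method exists, so try it on a sample value.
        print(f"str.{suggested} exists:", getattr("42", suggested)(5, "0"))
    else:
        print(f"str has no method named {suggested!r} -- the AI made it up")
        # List the closest real names for comparison.
        print([name for name in dir(str) if name.startswith("rjust")])

Running it shows the suggested method doesn't exist and points at the real one, which is exactly the kind of correction I then feed back to the AI.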
In this case, a paralegal should have been dispatched to go look up the citations. Not that complicated. AI is the natural progression of a line that started with the physical library and then moved to the internet. AI faithfully (though not necessarily correctly) incorporates human "expertise" into the information it gives back. It doesn't currently have a good way to check whether that human "expertise" is bullshit. We still need to verify the information.