Artificial intelligence is rapidly transforming legal systems worldwide, raising significant challenges concerning responsibility, liability, and ethical governance. As AI-driven tools increasingly shape contract analysis, compliance monitoring, and predictive policing, the conventional foundations of legal accountability face novel and difficult problems. The autonomy and adaptability of advanced AI systems make errors harder to trace, particularly when outcomes arise from opaque methods, machine-learning biases, or decentralized data processes. This emerging landscape calls for a re-evaluation of liability frameworks to determine who should bear responsibility: developers, users, deployers, or the AI systems themselves. The integration of AI demands robust legislative frameworks that guarantee transparency, explainability, and equity. This study examines how jurisdictions around the world are responding to these challenges, proposes models of shared responsibility, and underscores the need for harmonized legal norms that balance innovation with justice. Ultimately, the future of AI in law depends on its alignment with principles of accountability and human rights.