Hidayatullah National Law University (HNLU), Raipur
School of Law & Technology (SLT)
presents
Panel Discussion on
AI on Trial: Legal Liability, Regulation, and Ethics in the Age of Automation
About HNLU
Hidayatullah National Law University (HNLU), one of India’s premier institutions for legal education and research, proudly celebrates the 20th year of its establishment. Guided by the motto “Dharma Sansthapanartham”—for the sake of establishing the primacy of eternal values—HNLU continues to mold future-ready legal minds committed to societal transformation.
About the School of Law & Technology (SLT)
The School of Law & Technology (SLT) is HNLU’s dedicated research hub, working at the intersection of law, technology, and society. SLT leads academic and policy research on intellectual property, data protection, cybersecurity, AI governance, and the law–technology policy interface through its specialized centers, including the Centre for Internet Governance and Artificial Intelligence.
Event Details
- Event Name: AI on Trial: Legal Liability, Regulation, and Ethics in the Age of Automation
- Date: 16th April 2025
- Time: 4:00 PM IST
- Platform: Google Meet
- Mode: Online | Open to all (Registration mandatory)
Register here: https://docs.google.com/forms/d/e/1FAIpQLSdUpEBZB2dHX1cXKjm0oeKBgy4dWdnBodJkFCH3fvbypg0iHQ/viewform
Brochure: https://drive.google.com/file/d/1rv1y563g0oLLlbtlztrLqc-Ou-ChKAPq/view
Concept Note
As Artificial Intelligence (AI) increasingly influences vital sectors such as healthcare, law, finance, and law enforcement, the need to address its legal and ethical ramifications has become urgent. Autonomous decision-making by AI systems raises fundamental questions:
- Who is legally liable when AI causes harm—developers, users, or the AI itself?
- Are existing legal frameworks adequate to handle AI-driven harms?
- Should AI regulation focus on risk-based compliance, transparency, or strict liability?
- How can we ensure justice and accountability in a world run by algorithms?
This panel will explore how the legal system can evolve to uphold justice, fairness, and responsibility in the age of automation.
Panel Objectives
- Examine Legal Liability: Who bears responsibility for AI-generated harm?
- Explore Regulatory Frameworks: What models exist globally, and how effective are they?
- Address Ethical Considerations: What ethical risks does autonomous decision-making present?
- Propose Legal & Policy Solutions: What should a future-ready, rights-based legal framework for AI look like?
