It may shock some people, but I went through a rather "large" science fiction phase as a kid that others might call "obsessive." I don't see it that way, but, you know, it's fine. What I vividly remember is hating sci-fi movies about AI. The concept terrified me as a kid, especially after watching movies such as I, Robot; 2001: A Space Odyssey; and, possibly the most traumatizing of all, the Disney Channel Original Smart House. Young Elisa fully believed that if AI became a regular part of our society, it would take over civilization. Do I still believe that? Not necessarily.
The older and more tech-savvy I've gotten over the years, the more I've seen that AI can do a lot of good. Even in the past year, the advancements in AI (such as Stable Diffusion, ChatGPT, etc.) have been unbelievable to watch. AI has transformed multiple industries, including the legal profession, and made them more accessible to the public, which is always a good thing!
However, for all of its benefits, AI also raises significant legal, ethical, and social challenges. For example, if an attorney or law firm were to use ChatGPT in its practice and the system made an error or omission in legal advice to a client, who could be held responsible? How wide a net could be cast? Would only the attorney or law firm who used the AI be held liable? Or could liability extend to the AI developer and the website hosting the program?