From the course: The State of AI and Copyright

What are some resources for learning more about AI and copyright issues?

- In addition to some of the sites you already mentioned, are there any other resources people could check out if they wanted to find out more information, maybe from more of a layperson's point of view?

- There's a wealth of videos online explaining how generative AI works. And obviously ChatGPT, you can sign up for free and see for yourself what it's capable of doing. I'm not quite sure whether there are good free image generators. Maybe Bing has one you can use. But you've got to just experiment with this stuff, and if you're in business, you want to have a policy in place to allow your people to do it safely.

- Yeah.

- Yeah. One resource that I really like for companies using and deploying AI systems is the National Institute of Standards and Technology in the US, because earlier this year they released the AI Risk Management Framework. It's a high-level document, but it describes some of the steps that any enterprise should undertake to use and deploy AI models safely and effectively, thinking about certain key risks. I would encourage folks to read that. It's written at a generalized enough level that I think laypeople can understand it. There are also a ton of government resources. I mentioned the US Copyright Office; they have a whole webpage devoted to AI issues. So for content creators who are thinking about these issues and how to protect their work, I would direct them to the Copyright Office, in addition to contacting their favorite IP attorney. The Patent Office similarly has an AI webpage. From the White House, there have been several executive orders recently on AI and specifically generative AI, like the AI Bill of Rights, and notices about how to avoid bias and discrimination in AI algorithms. And I think the hot topic is the EU's AI Act. One takeaway that I really like from it, and there are lots of summaries out there on the EU AI Act, is the risk management approach: categorizing AI systems as high risk, medium risk, and low risk, and figuring out where you fall within that paradigm. I think that's going to be really helpful and useful for figuring out what steps you need to undertake to use and deploy your systems in a manner that's safe and less risky.
