The AI Act contains some specific provisions addressing the possible use of artificial intelligence for discriminatory purposes or in discriminatory ways within the European Union. The AI Act also regulates generative AI models. However, these two sets of rules have little in common: the non-discrimination provisions tend not to cover generative AI, and the generative AI rules tend not to cover discrimination. Based on this analysis, the chapter considers the current EU legal framework governing discriminatory output of generative AI models. It concludes that expressions already prohibited by anti-discrimination law certainly remain prohibited after the approval of the AI Act, while discriminatory content not covered by EU non-discrimination legislation remains lawful. For the moment, the AI Act has not brought any particularly significant innovation on this specific matter, but the picture may change in the future.
Threats are not protected speech, but defining what constitutes a threat has been problematic, particularly when it comes to online speech. We start with threats against the US president, beginning with a 1798 prosecution for threatening John Adams and leading up to the passage in 1917 of the first federal legislation against threatening the president. We then examine World War I-era prosecutions for threatening the president, culminating in the 1969 Supreme Court decision Watts v. United States, which distinguished true threats from protected political speech. We conclude with two cases of online threats: the prosecution of Anthony Elonis for posting threats on Facebook, and a case in which two tourists were denied entry to the US because joking tweets were treated as threats by US border agents. Threats, like obscenity, remain unprotected speech, but deciding what is and is not a threat in any particular case remains a problematic, subjective judgment.