AI + ethics
The following ethics pledge was issued by D-ID (named after de-identification technologies) on 29/Oct/2021. It specifically addresses synthetic media, including ‘virtual beings’, under the broader umbrella of artificial intelligence. Additional commentary by Dr Alan D. Thompson appears after each clause.
1. Ethical Foundation
We will strive to develop and use technology to benefit society, even at the expense of customer and investor priorities.
AI for the benefit of society has been the hallmark of all major
Western AI labs, with OpenAI (creators of GPT-3) going on record in Jul/2019
that: “To accomplish our mission of ensuring that AGI (whether built by
us or not) benefits all of humanity, we’ll need to ensure that AGI is
deployed safely and securely; that society is well-prepared for its
implications; and that its economic upside is widely shared.”
2. Ethical Use By Customers
We will work hard to ensure that our customers are using our technology
in ethical, responsible ways. We will endeavour to build “ethical use”
clauses into all of our terms and conditions, which will allow us to
suspend services and revoke the use license to those who fail to comply.
Again, OpenAI led the way here, restricting access to its models to an API and suspending the services of customers it deemed to be in breach of its terms. Whether this is in itself ethical (or equitable) is a different conversation.
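As an illustration of how that enforcement works in practice, here is a minimal sketch (hypothetical names and logic, not OpenAI’s or D-ID’s actual implementation) in which every API call is checked against the customer’s licence status before any content is generated; suspension then simply means flipping that status.

```python
# Minimal sketch of enforcing an "ethical use" clause at the API layer.
# All names (LicenceStatus, CUSTOMERS, handle_generation_request) are hypothetical.
from enum import Enum

class LicenceStatus(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"  # e.g. pending review of a suspected terms violation
    REVOKED = "revoked"      # licence permanently withdrawn

# Illustrative records; a production system would query a customer database.
CUSTOMERS = {
    "acme-media": LicenceStatus.ACTIVE,
    "example-violator": LicenceStatus.SUSPENDED,
}

def handle_generation_request(customer_id: str, prompt: str) -> str:
    """Refuse service to any customer whose licence is not active."""
    status = CUSTOMERS.get(customer_id, LicenceStatus.REVOKED)
    if status is not LicenceStatus.ACTIVE:
        raise PermissionError(f"Licence is {status.value} for customer '{customer_id}'")
    return f"<synthetic output for {prompt!r}>"  # placeholder for the real model call
```

Calling handle_generation_request("example-violator", "…") raises immediately, which is the technical analogue of ‘suspend services and revoke the use license’.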
3. Work toward an industry-wide track & trace system
We will work to collaborate with major platforms, operators and others
to create an industry-wide, standardized track and trace system (e.g. a
digital watermark system) to allow users and vendors to detect/be
alerted to synthetic media in all its forms. Until that exists, we will
work to ensure that all uses of our technology are clearly marked or
understood as synthetic. Our license agreements will permit the addition
of such watermarks in a way that will not interfere with the content.
Watermarking for audio and video has been available for many years (for example, Cinavia for audio), and is easily transferable to synthetic media and virtual beings. However, the same approach cannot currently be applied to text. As of Mar/2021, the GPT-3 language model was producing 52,000 words per second, the equivalent of a new US public library of content every day. There is simply no available mechanism to mark, track, or trace generated text.
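To make the watermarking idea concrete, here is a minimal sketch of the underlying principle using a simple least-significant-bit scheme on an image. It is purely illustrative, not any vendor’s actual system; production schemes such as Cinavia use far more robust, tamper-resistant techniques.

```python
# Minimal sketch of an imperceptible digital watermark: hide a bit pattern in the
# least significant bits of pixel values, then recover it later to flag the media
# as synthetic. Illustrative only.
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one watermark bit into the LSB of each of the first len(bits) pixels."""
    flat = image.astype(np.uint8).flatten()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # clear the LSB, then set it
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the least significant bits."""
    return image.astype(np.uint8).flatten()[:n_bits] & 1

# Tag a (stand-in) synthetic video frame with an 8-bit marker and verify recovery.
frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marker = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
tagged = embed_watermark(frame, marker)
assert np.array_equal(extract_watermark(tagged, len(marker)), marker)
```

Nothing comparable exists for plain text: any hidden signal would have to survive copy-paste, paraphrasing, and reformatting, which is why this part of the pledge cannot currently be fulfilled for text output.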
4. Avoid contentious areas
We will not knowingly license the use of our platform to political
parties. Nor will we knowingly work with pornography publishers or
terrorist organizations, gun or arms manufacturers. Should we discover
that such organizations are leveraging our technology, we will do
everything legally within our power to suspend services.
This clause is a minimum, and organisations would be advised to add ‘no selling’, as in Synthesia.io’s T&Cs – Prohibited Uses:
“You agree not to use [any avatar]… To create Content for “promoted”,
“boosted” or “paid” advertising on any social media platform or similar
media, without explicit permission from us.”
5. Ensure Moderation
Provided we are legally allowed, we will conduct random audits of both
original and generated materials that use our technology, where legally
and technically possible. We will do this to ensure that the material
and created output are consistent both with our values as well as
emerging standards and policies from governments and regulators.
Similar to clause 3, this is a cause for concern. Who are the auditors? Who decides the ‘values’? Who judges consistency? What oversight is in place?
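For what clause 5 might look like mechanically, the sketch below (hypothetical, not D-ID’s workflow) samples a small fraction of a day’s generations for human review; note that it answers none of the governance questions above.

```python
# Minimal sketch of a random-audit step: flag roughly 1% of generated items for
# manual review. The function name, sampling rate, and IDs are all hypothetical.
import random

def sample_for_audit(generation_ids, rate=0.01, seed=None):
    """Return a random subset of roughly `rate` of all items, queued for human audit."""
    rng = random.Random(seed)
    return [gid for gid in generation_ids if rng.random() < rate]

todays_outputs = [f"gen-{i:06d}" for i in range(10_000)]
audit_queue = sample_for_audit(todays_outputs, rate=0.01, seed=42)
print(f"{len(audit_queue)} of {len(todays_outputs):,} items queued for review")
```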
6. Treat talent fairly and transparently
Where an actor is visible or audible, we do our best to ensure that our
contract with them respects their privacy and consent, in line with
existing industry standards and expectations. In certain cases, there is
a need for talent behind the scenes, as drivers for synthetic output,
either from us or our licensees. In such cases, we will do our best to
require that the performers consent to their performances being used in
this way, are fairly paid, and informed of the distribution of their
performances.
7. Improve public awareness
Through a content program, we will educate the public about how our
technology, and synthetic media in general, works and how to spot its
use.
The focus on public awareness across the field of AI is severely lacking, and this is an urgent and vital objective. At the moment (in 2021–2022), the general public relies on perceptions shaped by outdated media (Hollywood movies) and grossly outdated technology (Siri, Alexa).
8. Make sure our datasets are unbiased
We will strive to train our platform with data sets that are diverse and
do not favor any particular ethnicity, age or community.
This is a noble and lofty goal, but it is unrealistic at this stage. Even when datasets are diverse, any attempt to filter or censor them (as when EleutherAI removed Literotica and the US Congressional Record) will reduce the quality of the model and its subsequent output. Major AI labs, including Facebook AI, Allen AI, and Anthropic, are investing significant work and time in removing bias.
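As a toy illustration of the trade-off described above, the sketch below shows how excluding whole sources shrinks the corpus a model can learn from. The source names follow the discussion; the document counts are invented purely for illustration.

```python
# Toy illustration of source-level dataset filtering. Counts are invented;
# only the source names follow the discussion above.
corpus = {
    "web_crawl": 750_000,
    "books": 120_000,
    "literotica": 40_000,
    "us_congressional_record": 15_000,
}
blocklist = {"literotica", "us_congressional_record"}

filtered = {source: n for source, n in corpus.items() if source not in blocklist}
removed = sum(corpus.values()) - sum(filtered.values())
print(f"Removed {removed:,} documents ({removed / sum(corpus.values()):.1%} of the corpus)")
```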
9. Respect copyright
We will contractually require our licensees to have the proper rights,
including processing rights, to all the source material including
images, audio and video involved in any generated content.
10. Cooperate with regulators
We will cooperate with appropriate regulatory and non-governmental
bodies for mutual dialogue about ethical development and deployment of
our tech.
This is a valuable addition to an ethics pledge, but it may require an entire team to keep up with the ethical guidance notes, documents, papers, policies, procedures, processes, and (more broadly) laws issued by many diverse organisations. For an overview of country-specific guidance, it can be helpful to view the thousands of pages across hundreds of guidance notes around the world collected by AiLab Artificial Intelligence Research – National Artificial Intelligence Strategies.
Dr Alan D. Thompson is an AI expert and consultant.
With Leta (an AI powered by GPT-3), Alan co-presented a seminar called ‘The new irrelevance of intelligence’ at the World Gifted Conference in August 2021.
His applied AI research and visualizations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.
He has held positions as chairman for Mensa International, consultant to GE and Warner Bros, and memberships with the IEEE and IET.
He is open to consulting and advisory on major AI projects with intergovernmental organisations and enterprise.