DALL-E prompt: The funniest AI image ever. Credit: DALL-E
Alice Trend, Liming Zhu and Qinghua Lu -- CSIRO
Nov. 24, 2023
Artificial intelligence is so hot right now.
ChatGPT, DALL-E, and other AI-driven platforms are providing us with completely new ways of working. Generative AI is writing everything from cover letters to campaign strategies and creating impressive images from scratch.
Funny pictures aside, international regulators, world leaders, researchers, and the tech industry are asking serious questions about the risks posed by AI.
AI raises big ethical issues, partly because humans are biased creatures, and that bias can be amplified when we train AI. Poorly sourced or poorly managed data that lacks diverse representation can lead AI systems to actively discriminate.
We've seen bias in police facial recognition systems, which can misidentify people of color, and in home loan assessments that disproportionately reject certain minority groups. These are examples of real AI harm, where appropriate checks and balances were not applied before launch.
AI-generated misinformation, such as hallucinations and deepfakes, is also top of mind for governments, world leaders, and technology users alike. No one wants their face or voice impersonated online. The big question is: how can we harness AI for good while preventing harm?
READ MORE: TechXplore