Jan 12, 2022

Something about space

Tech leaders' Open Letter proposed a pause on ChatGPT. But researchers already know how to make artificial intelligence safer.

Something about space goes here. Mars, we're coming soon: 2030.


Another crucial step toward safety is collectively rethinking the way we create and use AI. AI developers and researchers can begin establishing norms and guidelines for AI practice by listening to the many individuals who have been advocating for more ethical AI for years. These include researchers like Timnit Gebru, who proposed a "slow AI" movement, and Ruha Benjamin, who stressed the importance of creating guiding principles for ethical AI in her keynote at a recent AI conference. Community-driven initiatives, such as the Code of Ethics being implemented by the NeurIPS conference, are also part of this movement; they aim to establish guidelines for what is acceptable in AI research and how to weigh its broader impacts on society.
