
ancianita

(36,271 posts)
Sun Nov 19, 2023, 11:02 AM

Keepin' Up with Who's Keeping an Eye on AI -- A Look at How the Future Proofing is Going

Having been aware of this reality for only the last six years, I'm thankful there have been groups of influential humans looking out for fellow humans' future survival in the face of current large-scale human and planetary events.

One is the Future of Life Institute (FLI) in Cambridge, MA. Ever since the days of Oppenheimer, it's been understandable that such institutes are established to help engineers, scientists, investors, governmental leaders -- humanity -- stand back and look at the ramifications of their work.

FLI's stated mission is to reduce global catastrophic and existential risk from powerful technologies.[1] FLI's philosophy focuses on the potential risk to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but it has also stated that it works to prevent risk from biotechnology, nuclear weapons, and global warming.[2]


FLI obtains signatures from world tech/science/engineering/business leaders on its awareness-building open letters to those who can redirect course for humanity.
https://futureoflife.org/fli-open-letters/

Excerpts from the latest two:

1.
From seven months ago: Pause Giant AI Experiments: An Open Letter -- We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.


2.
From last month: AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats --
This joint open letter by Encode Justice and the Future of Life Institute calls for the implementation of three concrete US policies in order to address current and future harms of AI.


Dear Senate Majority Leader Schumer, Senator Mike Rounds, Senator Martin Heinrich, Senator Todd Young, Representative Anna Eshoo, Representative Michael McCaul, Representative Don Beyer, and Representative Jay Obernolte,

As two leading organizations dedicated to building an AI future that supports human flourishing, Encode Justice and the Future of Life Institute represent an intergenerational coalition of advocates, researchers, and technologists. We acknowledge that without decisive action, AI may continue to pose civilization-changing threats to our society, economy, and democracy.

At present, we find ourselves face-to-face with tangible, wide-reaching challenges from AI like algorithmic bias, disinformation, democratic erosion, and labor displacement. We simultaneously stand on the brink of even larger-scale risks from increasingly powerful systems: early reports indicate that GPT-4 can be jailbroken to generate bomb-making instructions, and that AI intended for drug discovery can be repurposed to design tens of thousands of lethal chemical weapons in just hours. If AI surpasses human capabilities at most tasks, we may struggle to control it altogether, with potentially existential consequences. We must act fast...

Encode Justice and the Future of Life Institute stand in firm support of a tiered federal licensing regime, similar to that proposed jointly by Sen. Blumenthal (D-CT) and Sen. Hawley (R-MO), to measure and minimize the full spectrum of risks AI poses to individuals, communities, society, and humanity. Such a regime must be precisely scoped, encompassing general-purpose AI and high-risk use cases of narrow AI, and should apply the strictest scrutiny to the most capable models that pose the greatest risk. It should include independent evaluation of potential societal harms like bias, discrimination, and behavioral manipulation, as well as catastrophic risks such as loss of control and facilitated manufacture of WMDs. Critically, it should not authorize the deployment of an advanced AI system unless the developer can demonstrate it is ethical, fair, safe, and reliable, and that its potential benefits outweigh its risks.

We offer the following additional recommendations:

A federal oversight body, similar to the National Highway Traffic Safety Administration, should be created to administer this AI licensing regime. Since AI is a moving target, pre- and post-deployment regulations should be designed with agility in mind.

Given that AI harms are borderless, we need rules of the road with global buy-in. The U.S. should lead in intergovernmental standard-setting discussions. Events aimed at regulatory consensus-building, like the upcoming U.K. AI Safety Summit, must continue to bring both allies and adversaries to the negotiating table, with an eye toward binding international agreements. International efforts to manage AI risks must include the voices of all major AI players, including the U.S., U.K., E.U., and China, as well as countries that are not developing advanced AI but are nonetheless subject to its risks, including much of the Global South.

Lawmakers must move towards a more participatory approach to AI policymaking that centers the voices of civil society, academia, and the public. Industry voices should not dominate the conversation, and a concerted effort should be made to platform a diverse range of voices so that the policies we craft today can serve everyone, not just the wealthiest few.


Encode Justice, a movement of nearly 900 young people worldwide, represents a generation that will inherit the AI reality we are currently building....


This might help explain why all 20 attendees of Schumer's AI Insight Forums raised their hands when he asked who agreed that their enterprises should be federally regulated.

One concern: It's not known to what extent these institute groups influence the U.S. Department of Defense. But one could bet that President and Commander-in-Chief Biden didn't meet with President Xi Jinping only to reach agreements on fentanyl.

RainCaster

(10,979 posts)
1. Can parents be held responsible for the actions of their children?
Sun Nov 19, 2023, 11:33 AM

If so, those who design and train AI should be responsible for the output of that AI.

ancianita

(36,271 posts)
2. The analogy sounds good, but doesn't really apply here, because
Sun Nov 19, 2023, 11:50 AM

the collective work of an AI company is a) owned by the company, and b) a shared responsibility -- borne not just by employees and management, but by shareholders. A closer parallel might be that those who funded Jan 6 are as responsible as those who carried it out.

Usually, an investor-elected board is responsible. But if AI goes sideways, that board wouldn't be HELD responsible, because
a) would there even be enough time to stop its loss, harm & damage, never mind
b) to take that board to court?
There would have to be criminal laws in place in the first place to make any AI board legally responsible. So far there are none.

So while the 118th Congress isn't working on that, Biden is. For now.

Our part is to vote in a 119th Congress that will do what the 117th got done under Pelosi/Schumer. We've got to assert the will of The People, because right now, Ohio and other states are trying to nullify it.
