Discussion about this post

Lucy

Thank you for putting some of my feelings into words, Zilan! In many instances, when I mention I want to do "China-related work," people send me to the export control corner.

Ze Shen

I agree with your post, but I also get why export controls are often associated with AI safety, particularly the 'AI notkilleveryoneism' part of AI safety.

So my understanding of the thesis is that we will create AGI, and AGI will kill everyone. What can you do about it? You can either solve alignment and make sure AGI doesn't kill everyone, or prevent AGI from being created until the former happens. What makes AGI happen sooner rather than later? Race dynamics. More specifically, if two or more entities are close to the frontier, then they will race harder. If there's only one entity at the frontier and everyone else lags behind, then they can chill (relatively). If you see 'US' and 'China' as meaningful entities, then slowing down number 2 (i.e. China) could meaningfully slow down AI progress, and it would directly contribute to this version of AI safety by not accelerating the race towards AGI.

But different actors have different agendas, and national security folks have increasingly been part of the conversation, so this argument has picked up a flavour of "let's slow down China because the US needs to remain the best country in the world".

I guess the root of the problem is that, like any pre-paradigmatic field, 'AI Safety' is ill-defined and nebulous, and different people have different visions of it, so certain things get lumped under the field and jeopardize all sorts of other things.

