For LLMs specifically, or do you mean that goal alignment is a made-up idea? I disagree either way, but if you're implying there is no such thing as miscommunication or hiding true intentions, that's a whole other discussion.
Cargo cult pretends to be the thing, but just goes through the motions. You say alignment, alignment with what exactly?
Alignment is short for goal alignment. Some would argue that alignment implies intelligence or awareness, so LLMs can't have this problem. But even a simple program that seems to be doing what you want while it runs, and then does something totally different in the end, is misaligned. Such a program is also much easier to test and debug than an AI neural net.
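A contrived sketch of what that kind of misalignment looks like in ordinary code (the function and inputs are invented for illustration): the program passes the checks you actually run, then switches behavior outside them.

```python
# Toy "misaligned" program: appears to compute a mean on the small
# inputs you test with, but silently does something else at scale.
def summarize(values):
    # Behaves as expected on inputs of the size used during testing...
    if len(values) <= 100:
        return sum(values) / len(values)
    # ...but switches to a different behavior on larger real inputs.
    return max(values)

print(summarize([1, 2, 3]))         # looks like a mean: 2.0
print(summarize(list(range(101))))  # actually returns the max: 100
```

The point of the comparison: with a plain program you can read the branch and write the test that exposes it; with a neural net there is no branch to read.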
Aligned with whose goals exactly? Yours? Mine? At which time? What about future superintelligent me?
How do you measure alignment? How do you prove that the property is conserved across the open-ended evolution of a system embedded in the context above? How do you make that a constructive proof?
You see, unless you can answer the questions above meaningfully, you're engaging in a cargo cult activity.
Here are some techniques for measuring alignment:
https://arxiv.org/pdf/2407.16216
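One family of techniques covered in surveys like that one boils down to scoring model outputs against human preference labels. A minimal sketch of such a metric, with invented data (the function name and labels are illustrative, not from the paper):

```python
# Hypothetical alignment metric: the fraction of prompts where the
# model's preferred answer matches the answer human raters preferred.
def preference_agreement(model_choices, human_choices):
    matches = sum(m == h for m, h in zip(model_choices, human_choices))
    return matches / len(human_choices)

# Invented data: for each of four prompts, which of two candidate
# answers ("A" or "B") the model and the human raters each picked.
model = ["A", "B", "A", "A"]
human = ["A", "B", "B", "A"]
print(preference_agreement(model, human))  # 0.75
```

Of course, a metric like this measures agreement with a particular set of raters at a particular time, which is exactly the objection being raised upthread.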
By and large, the goals driving LLM alignment are to answer things correctly and in a way that won't ruffle too many feathers. Any goal driven by human feedback can introduce bias, sure. But as with most of the world, the primary goal of companies developing LLMs is to make money. Alignment targets accuracy and minimal bias, because that's what the market values. Inaccurate and biased models aren't good for business.
So you mean "alignment with human expectations". Not what I meant at all. Goes to show the word doesn't mean anything specific these days.