We can only pretend to have free will insofar as our future actions are unpredictable.
The offensive approach to being more free is to become less predictable in our thoughts, actions, and growth patterns.
The defensive approach is to make ourselves less legible to our environment, so that it can't anticipate and preemptively respond to us.
This would mean that freedom is contextual, and depends as much on our environment as on us, which is a scary thought.
When using products/services that collect data and use it to predict our future actions, we are essentially paying with agency, even when the product is ostensibly free.
Someday enough data will be collected that a machine learning algorithm will invent (discover?) psychohistory. At that exact moment, it can be said that the human race has lost its free will.
@tumba The toot is definitely predicated on that belief.
It seems right to me that a being having perfect knowledge of the future would necessitate the absence of free will. But part of me wants to rebel against that argument. It makes me uncomfortable.
Is perfect knowledge of something all that is needed to perfectly predict future action, or is it possible for a conscious being to defy expectations despite perfect knowledge?
Almost feels like I'm asking if there's a soul.
Seems to me that
1) such a 'discovery' would only destroy the *illusion* of free will. If it's discoverable, then it was always there; it was just illegible.
2) such a theory would only be applicable to a sufficiently large and complex grouping of humans, so individual free will (or at least the illusion of it) would be preserved.