Zach Faddis @zacharius

We can only pretend to have free will insofar as our future actions are unpredictable.

The offensive approach to being more free is to become less predictable in our thoughts, actions, and growth patterns.

The defensive approach is to make ourselves less legible to our environment, so that it can't anticipate and preemptively respond to us.

This would mean that freedom is contextual, and depends as much on our environment as on us, which is a scary thought.


When we use products/services that collect data and use it to predict our future actions, we are essentially paying with agency, even if the product is ostensibly free.

Someday enough data will be collected that a machine learning algorithm will invent (discover?) psychohistory. At that exact moment, it can be said that the human race has lost its free will.

@zacharius It strikes me that this is a reformulation of a well-defined problem in medieval philosophy, namely the relationship between God’s foreknowledge and free will. Does foreknowledge mean that human agency doesn’t exist?

@tumba The toot is definitely predicated on that belief.

It seems right to me that a being's having perfect knowledge of the future would necessitate the absence of free will. But part of me wants to rebel against that argument. It makes me uncomfortable.

Is perfect knowledge of something all that is needed to perfectly predict future action, or is it possible for a conscious being to defy expectations despite perfect knowledge?

Almost feels like I'm asking if there's a soul.

@zacharius
Seems to me that

1) such a 'discovery' would only destroy the *illusion* of free will. If it's discoverable, then it was always there; it was just illegible.

2) such a theory would only be applicable to a sufficiently large and complex grouping of humans, so individual free will (or at least the illusion of it) would be preserved.

@sgharvey @zacharius We also have chaotic effects. Seeing humans as input-output systems can be useful for some types of interactions, but most of the relevant data is, at best, impossible to quantify. This is further complicated by people changing their ontologies as they try to preserve their sense of free will.

@zacharius Defensive: increase the noise-to-signal ratio (either generate noise or inhibit signals). Offensive: show real signal with high individual variability.