Like this: when I’m talking to a voice recognition system and it’s telling me to say “Yes” or “No,” I find that I’m saying “Yes” in this way that isn’t speaking to be understood by a human; I’m speaking to be understood by a machine. The machine is basically making a concession by allowing you to speak human, and you’re making concessions to speak in machine dialect. You have to imagine how a computer wants to hear you, to make yourself understood.
We are learning, all of us, how to speak system.
He also brings up the interesting account of the stock-analysis company Nanex naming certain algorithmic behaviours that repeat in patterns (above is the 'Boston Shuffle'), drawing an analogy between that practice and the first sailors, who named constellations for navigation because they were unable to properly fathom, and thus map, the sky.
Without Nanex providing us with the names and images of the market, how would we be able to imagine it? We make stories to understand the world. If they’re fictional, like the stories of the zodiac, that doesn’t make them any less important for sailors in understanding where they were.
Some great considerations here at the crossroads of systems and technology, which brush quite close to, and expand on, the initial research I did into databasing and fiction.
...what spooks me about algorithms as nature is precisely that they have no distortion, they have no affordance, there's no purchase on the world they describe. Their illegible nature is, quite literally, a world without narrative. There's only a beginning and an end.